AI and Food: Safer and Tastier Food?

 

In February 2019, Tristan Greene wrote an article in The Next Web citing an IBM research study which suggested that artificial intelligence could improve the taste of food by creating new hybrid flavors. The piece took part of the internet by storm, less for its clickbait headline and more for its substance. Greene was stating facts when he opened with: “AI will soon decide what we eat”.

Let’s explore the what, the how, and the why. We are sure you already know the why, so we’ll mostly skip it.

Artificial Intelligence + Food. Really?

That seems a sensible question, but hardly a surprising one. AI and machine learning have already taken over the world, influencing everything from blockchain to computer vision to chemistry. So why not food production?

Now IBM, other tech giants, and new startups are bringing AI to food production, feeding AI systems millions of data points from sensory science, consumer preference, and flavor palettes to generate new or enhanced flavors that can literally set your mouth on fire. Or make it drool all day. Or make even the most tasteless food taste like heaven. Kale and quinoa, anyone?

The food industry has already scrambled to put artificial intelligence and machine learning to work. Take, for example, the world’s first automatic flatbread-making robot, Rotimatic, which limits user involvement to putting the ingredients into the appliance. It does all the dirty work itself and claims to bake hot flatbread in under a minute.

It is not just kitchen appliances: the food we eat and its ingredients are also being shaped by AI and related techniques, even as we debate whether genetically modified food products are safe for human consumption. AI-driven tools have already suggested changes in cooking style and the omission or replacement of certain ingredients, among other tweaks. While none of these have hit the shelves yet, the new tool from IBM looks like it is just around the corner.

According to the study, IBM and McCormick & Company, a pioneer in flavors and food innovation, created a novel AI system whose aim is to create new flavors. Published in February 2019, the blog post promised that some of its findings would reach shelves by the end of the year. It is now September and we are still waiting, so let’s have a look at the scope of AI in the food industry.

How Does AI Help Food Become Better?

To answer this question, Greene uses the analogy of Google Analytics. Publicly available data such as recipes, menus, and social media posts about those recipes, along with trends in the food industry, are fed to AI systems, which then generate fresh, actionable insights.

An example is a tool that can show restaurants what the most popular food will be each month for the next 12 months. If such forecasts prove accurate, a restaurant can prepare in advance and perhaps even delight its customers, eventually becoming popular and running a successful service.
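
To make this concrete, here is a minimal sketch of such a forecaster. It is purely illustrative, not IBM's or anyone's actual system: it assumes you have already scraped (month, dish) mention pairs from menus and social posts, and it uses a naive "same month in previous years" heuristic:

```python
from collections import Counter, defaultdict

def top_dish_by_month(mentions):
    """mentions: list of (month, dish) tuples scraped from menus and
    social posts. Returns a dict mapping each month (1-12) to its
    most-mentioned dish, a naive 'same month last year' forecast."""
    per_month = defaultdict(Counter)
    for month, dish in mentions:
        per_month[month][dish] += 1
    return {m: c.most_common(1)[0][0] for m, c in per_month.items()}

# Toy, made-up historical data
history = [
    (1, "ramen"), (1, "ramen"), (1, "pho"),
    (7, "poke bowl"), (7, "poke bowl"), (7, "gazpacho"),
]
forecast = top_dish_by_month(history)
print(forecast[1])  # most-mentioned January dish: "ramen"
```

A production system would of course use far richer features and a trained model rather than raw counts, but the pipeline shape (public data in, per-period ranking out) is the same.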

The same goes for farming models, where new techniques are needed to plant and grow more produce as the population grows faster than the land available to feed it. Everyone involved in research on AI and the food industry is positive about what can be done.

Existing data is of prime importance if such tools are to bear any fruit. In the IBM example above, the tool can create new flavors only because of the data on existing flavors that we already have. In a way, AI is simply helping us discover flavors sooner.
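
One common way existing flavor data gets used, sketched below with entirely made-up compound profiles, is the food-pairing hypothesis: ingredients that share more flavor compounds are scored as more promising combinations. Real databases list hundreds of compounds per ingredient; this is only a toy illustration of the scoring idea, not the IBM/McCormick method:

```python
def pairing_score(compounds_a, compounds_b):
    """Count the flavor compounds two ingredients share; more overlap
    suggests a more compatible pairing under the food-pairing hypothesis."""
    return len(set(compounds_a) & set(compounds_b))

# Hypothetical compound profiles (illustrative names, not real chemistry data)
profiles = {
    "strawberry": {"furaneol", "linalool", "hexanal"},
    "basil": {"linalool", "eugenol", "hexanal"},
    "beef": {"hexanal", "pyrazines"},
}

candidates = [("strawberry", "basil"), ("strawberry", "beef"), ("basil", "beef")]
ranked = sorted(candidates,
                key=lambda p: pairing_score(profiles[p[0]], profiles[p[1]]),
                reverse=True)
print(ranked[0])  # the highest-overlap pair
```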

AI Everywhere in the Food Industry

So far, we have spoken about the use of AI in farming, food recipes, and restaurants. But what about food processing? Media coverage suggests that AI is everywhere, from sorting foods to making supermarkets more super.

According to Food Industry Executive, there are plenty of examples that highlight the significance of AI in the food industry. Some of them, courtesy of Krista Garver, are listed below:

  • Food sorting – AI helps decide which potatoes (by size, quality, and age) should become French fries and which are better suited to hash browns, potato chips, or some other product. This involves cameras and near-infrared sensors that study the geometry and quality of fruits and vegetables
  • Supply chain management – This is obvious: food monitoring, pricing and inventory management, and product tracking (from farms to supermarkets)
  • Hygiene – AI can detect whether workers are wearing all the necessary equipment. Since AI tools are fed data on what constitutes full hygiene compliance, they can constantly check workers’ attire and rate it. Is a worker not wearing a plastic hat? An alert goes to their manager
  • New products – This is similar to the IBM example above. Predictive algorithms can identify which flavors are most popular with certain age groups. Why do kids love Kinder Joy? Which ingredients make them go bonkers?
  • Cleaning – This is the most promising one where ultrasonic sensing and optical fluorescence imaging can be used to detect bacteria in a utensil; this information can then be used to create a customized cleaning process for a batch of similar utensils.
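
The sorting item above boils down to a routing decision on measured features. A minimal sketch of that decision logic might look like this; the thresholds and feature names are illustrative assumptions, not real industry values (actual systems learn these boundaries from labeled camera and near-infrared data):

```python
def route_potato(length_mm, blemish_ratio):
    """Route a potato to a product line from camera/NIR measurements.
    Thresholds are hypothetical, for illustration only."""
    if blemish_ratio > 0.25:        # too many surface defects
        return "reject"
    if length_mm >= 90:             # long potatoes cut into longer fries
        return "french fries"
    if length_mm >= 60:
        return "hash browns"
    return "potato chips"

print(route_potato(100, 0.05))  # prints "french fries"
```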

Conclusion

It is mind-numbing (mouth-watering, too?) to visualize these products actually taking shape within a few years, which is why there is little doubt that AI will revolutionize the food market. The only question that remains: has the revolution already begun, now that you can’t say no to a bunch of addictive products?

A Look at Ideal Agile Implementation in an Organization!

They say Agile project management is the new normal. Despite its demerits, it has established itself as one of the most effective project management approaches in the corporate world, managing to deliver work even when no desired outcome has been specified. In today’s competitive professional environment, the mantra is to start working and worry about the output later.

In simple words, an Agile system involves producing and delivering work in short bursts and then refining it until it matches the client’s often ambiguous requirements.

Based on findings from multiple reports in 2018, Agile is inching ahead as one of the most widely used project management approaches, just behind predictive approaches like Waterfall. However, a majority of those who implement Agile in their workplace do not have much idea of what to expect, how long the streamlining takes, or what an ideal system looks like.

This is why it is important to visualize an ideal Agile implementation framework. Let’s study the four major characteristics, or outcomes, of an Agile framework that has been implemented the right way. Starting with what success looks like…

Top Characteristics of Ideal Agile Implementation

It is better to understand what can go wrong with the implementation in your organization than to sit with folded hands and wait for a rescue team. Let’s have a look.

Agile Creates a Controlled Work Environment

Agile implementation in its ideal form gives its users more benefits than traditional project management tools. The biggest advantage is that even though employees might have to push themselves to complete a short burst of tasks in, say, two days rather than stretching the same tasks over a few weeks, they will feel better once the tasks are done. This prevents burnout and lets them recover properly during their time off.

For example, in an Agile environment, a team of six employees will aim to complete a set of tasks between Monday and Friday. An ideal implementation will allow them to sign off on Friday and enjoy a weekend without the anxiety that may have arisen had they paused the work on Friday in an attempt to resume it on Monday.

Apart from this, a successful Agile implementation yields many more benefits from a human resources point of view:

  • Workplaces become less resistant to change
  • Employees have clearer career objectives
  • Role transitions are simplified, as tasks are designed so that anyone with a little experience can assume new duties
  • A problem-solving environment takes hold

Allows Hybrid Systems

Any organization that deals with a large number of clients is bound to have some type of project management in place. Statistically, the most popular is the Waterfall system, which follows a hierarchical flow of duties.

In such scenarios, Agile does not act as a disruptive mechanism. Instead, it allows for collaboration which can only improve the productivity and overall morale of the workforce.

According to Agile coach Johan Karlsson, many aspects of hardware development benefit from Waterfall processes, whereas software development has much to gain from an Agile approach. This is where the hybrid mechanism bears fruit as companies can look at fully utilizing their resources.

In an ideal hybrid case, a Waterfall process is used at the top level and Agile is used for operations-level work. This ensures little to no friction between teams as they embrace both management techniques.

A Perfect Environment for Employees

According to a recent report by McKinsey and Co., the Agile environment is creating a war for talent. This has led to competitive hiring and an increase in the retention of talent that organizations consider indispensable or “too great to lose.” This is a direct consequence of the very character of an Agile environment.

As is well known, employees are given tasks based on their skillset. This means an employee can choose to excel in a skill she already has sufficient exposure to, advancing to more senior levels. For instance, a Scrum Master can look at managing a team not just in an Agile environment but also under other project management systems the organization may be using. This example shows how employees can climb the corporate ladder while focusing on and expanding their existing skillset.

Another major advantage that often gets overlooked is Agile’s ability to break complacency. In every work sector in the world, employees feel that they hit a state of complacency after some time in a specific role. Since Agile expects team members to extend their skillset based on the feedback loop with the client, employees stand to gain better work experiences.

Changes in the Organization Culture

In a mature Agile environment, employees are not the only ones who benefit. Everyone from team members to management to human resources experiences a shift in how they work, often sudden and very different from what they had been doing in the past.

Agile not only engages every person in a company but also brings out the best in them. As a result, the organization transforms from an average company into one full of energy and productivity.

Just as Agile uses a feedback system to improve project delivery, its environment is built on the same principle. Agile identifies great talent, and that in turn helps the environment flow like a stream of water.

All said and done, it is pragmatic to note that Agile is a talent-based system. You cannot expect a company to move mountains with an Agile implementation if its members do not possess the skills required even to initiate a project. Therefore, talent acquisition is an essential prerequisite for an ideal Agile environment.

What have been your experiences when it comes to Agile implementation? Share them in the comments section below to start a discussion.

Artificial Intelligence as an Anti-Corruption Tool

It is obvious why anyone would want to put a cork in corruption and then hurl it into space to disintegrate. So, when a group of scientists from the University of Valladolid in Spain put together a computer model that can predict instances of graft in government agencies, the whole world took notice.

Here are some of the top takeaways from the study, which was reported by FECYT (the Spanish Foundation for Science and Technology) and first published online in Springer on 22 November 2017.

What was the study about?

According to the research paper published in Springer, the study created a computer model, based on neural networks, that sends out warnings about possible instances of graft in a government office. These warnings can then drive corrective and preventive measures, which, in other words, means changing the way a government functions or weeding out certain bad apples, for lack of a better term.

The model in the study uses corruption data extracted from several Spanish provinces where graft occurred between 2000 and 2012. Many different factors were at play in defining how a routine incident of corruption would occur in any given government agency.

The researchers’ aim was to understand which factors play a key role and then administer changes to those factors in an attempt to eradicate corruption. Of course, this process will be iterative, as no “bad habit” can be weeded out in a single go.

What were the findings?

According to the paper published by the researchers, the following are the key takeaways. Since Spain follows a customized form of parliamentary monarchy, the model can be readily translated and adapted for other, similar governments. Of course, the data will need to be updated.

  • Public corruption is caused by multiple factors such as:
    • Taxation of real estate and a steady increase in property prices
    • Nature of economic growth, GDP rate, and inflation
    • Increase in the total number of non-financial institutions
    • Sustenance of any single political party for a “long time”
  • Since data on actual cases of corruption was used to build the model, it offers a better look at the factors than a model based solely on the perception of corruption would. Such perception-based studies have not yielded much, nor have they garnered much public interest
  • Corruption can be predicted up to three years before it occurs

Where does AI come in the picture?

Since all of this sounds too good to be true, it is wise to ask what the role of artificial intelligence is in this study. In order to do that, let us go back to other studies that have tried to predict corruption.

All previous studies on the subject have depended on data that were more or less subjective indexes of the perception of corruption, reports Science Daily. In other words, the only data being used was whatever was available in the public domain. Since governments can sometimes interfere with the sourcing of this data by private agencies like Transparency International, the database stops being useful. All it gives the model is data that does not reflect the true gravity of the situation.

On the other hand, when actual case data is used, artificial intelligence has much more to feed on, and it can produce a model that predicts the very nature of corruption. This is where the new study excels compared with earlier reports.

The biggest role of artificial intelligence in this exercise is to find correlations in a dataset through a process that attempts to mimic how the human brain works. If we fed a human being scores of detailed court cases about corruption charges against government employees, they would take decades to dig through them and still end up without an actionable conclusion.

A neural network, on the other hand, analyses the data, studies its various factors, and creates a relationship between them to see what connects with what. Let’s take a rough example to make this clear.

If, out of 10 cases of corruption, 8 involve a particular modus operandi and a similar cause for the exchange of money, then such a connectionist system will flag that as a recurring factor. This information, along with several other such signals, is then used to reach a conclusion. Of course, this example is an imaginary one, and the amount of data fed into the study was far larger, which makes the result even more robust. Twelve years of actual corruption cases are bound to give such an AI tool enough grounding to work with. Still, it should be noted that no amount of data ever feels quite sufficient when one is attempting predictive analysis.
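
The "8 out of 10" example above can be sketched in a few lines. This is not the researchers' neural network, which learns weighted, non-linear combinations of factors; it is only a frequency count, with invented factor names, that illustrates what "flagging a recurring factor" means:

```python
from collections import Counter

def recurring_factors(cases, threshold=0.8):
    """Flag factors that appear in at least `threshold` of the cases.
    Each case is a set of observed factors (modus operandi, cause, etc.).
    A real neural network would learn subtler combinations; this simple
    count only illustrates the 'recurring factor' idea from the text."""
    counts = Counter(f for case in cases for f in case)
    n = len(cases)
    return {f for f, c in counts.items() if c / n >= threshold}

# Hypothetical data mirroring the 8-of-10 example: 8 cases share a
# modus operandi and a cause, 2 cases do not
cases = [{"inflated-contract", "cash-exchange"}] * 8 + [{"nepotism"}] * 2
print(recurring_factors(cases))  # flags the two factors seen in 8/10 cases
```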

Finally, according to the Chr. Michelsen Institute, this type of pattern-based predictive tool may well become a smart anti-corruption instrument. Its ability to handle big data and detect anomalies in it makes it a promising system that could be adopted by the late 2020s. However, the institute also points out the single biggest concern over its use: it will push for more surveillance, as data about even the smallest corruption cases, including the personal details of individuals, gets added to the system.

Conclusion

It is a great relief that AI, in theory at least, does not mimic the dangers portrayed in science-fiction films and is instead seen as a technology that can help humanity lead a better life. Its use in anti-corruption neural networks is a step in the right direction, and with further research it will pave the way for better governance, where those being governed have one less accusation to make against the government.