
Artificial Intelligence for new drug discovery

Biomedical innovation based on AI technology is the long-awaited opportunity to achieve greater efficiency in this industry. Developing new drugs through R&D innovation in less time and at lower cost is the Holy Grail of the biopharmaceutical industry.

Scientific innovation involves not only finding the molecular mechanism of a disease but also developing new drugs to cure, palliate or prevent it.

Innovation in the pharmaceutical industry costs over €2,400 million per new medicine, according to Farmaindustria. Global R&D investment in the pharmaceutical sector, meanwhile, amounts to some €30,000 million in Europe alone, a figure that rises to €142,000 million worldwide.

Of this amount, 57% goes to the design, development and clinical trial phases, while around 40% goes to basic research, approval processes and pharmacovigilance.

According to data provided by biopharmaceutical industry representatives, developing a new medicine takes about 12 to 13 years from discovery to clinical use in patients. However, only a few molecules ever reach commercialization; many fall by the wayside during the phases of the drug development process.

It is precisely in drug target discovery and design that AI-based techniques have cut development time in half and costs by 25% in the production of new drugs.

The Spanish biopharmaceutical company Sylentis has implemented software based on neural networks, SVMs and machine learning to gather, filter and reinterpret experimental data generated by the pharmaceutical industry. The software can be trained to generate thousands of disease-specific candidate compounds in a matter of days, reducing the expensive, time-consuming task of candidate selection from years to just a few days.

AI for personalized drugs

A survey conducted last June by Deloitte and MIT Sloan Management Review found that only 20% of biopharmaceutical companies are digitally mature, and that a lack of clear vision, leadership and financing is slowing companies' growth.

According to MarketsandMarkets, AI demand in the biopharmaceutical industry is expected to grow from US$198.3 million in 2018 to US$3.88 billion in 2025.

Four areas are projected to drive most of the AI market in biopharma between 2018 and 2025: drug discovery, precision medicine, diagnostic imaging, and medical diagnosis and research. The report notes that drug discovery held the largest market share during the period surveyed.

These areas, which range from selecting candidate target molecules to producing the new drug, provide a unique opportunity to speed up drug development. Potential process improvements include:

  • Process redesign based on expert knowledge to shorten the time needed to discover new molecules.
  • Digitalization: automating repetitive processes and generating new content and data.
  • Advanced analytics incorporating internal and external sources, including new predictive models.

A key role for AI algorithms is forecasting molecular interactions to uncover disease mechanisms. These mechanisms, in turn, could help establish new biomarkers to identify, design, validate and optimize new candidate drug targets, and to identify existing drugs that could be repurposed for other indications.

 


The data translator: the hidden figure behind successful AI implementation in organizations

Artificial Intelligence implementation in companies is cross-functional: Marketing, Finance, Operations… all have benefited from the spread of data-driven practices across their organizations' business processes.

A recent study by Fujitsu and Pierre Audoin Consultants shows that the investments companies have made in Artificial Intelligence are starting to pay off. This is no longer a matter of five years away; AI's time has come. Still, the figures remain low: only 11% of surveyed companies are implementing AI strategies, 29% have AI projects in progress, and 35% expect to start in the next two years.

Under this classification, companies can be defined as innovators, early adopters or followers; in other words, depending on its maturity and adaptation to data, each company will fall into one of these groups.

Accordingly, 53% of companies that have implemented AI, or plan to do so, believe improvements in process automation depend on it, while almost 75% are creating business units to kick-start AI implementation. The main areas where this technology is applied are production efficiency, predictive maintenance and, above all, forecasting customer behavior to drive the right business actions.

Nevertheless, a survey published by MIT Sloan Management Review and Boston Consulting Group a few weeks ago highlights different data.

Although it confirms AI's promise of rewards, these are not risk-free: a competitor, for example, may take the risk and get a step ahead. Those are the innovators, who use AI to align, invest in and integrate the company's business processes.

Many leading companies see AI not only as an opportunity but as a strategic risk, a perception that has risen from 37% in 2017 to 45% in 2019.

When it comes to risk management, many AI-based initiatives have failed: seven out of ten surveyed companies say they have barely benefited from this technology. That is no trivial matter when almost 90% of companies have invested in AI.

Thus, even if some companies have found success with AI, most struggle to derive value from it. As a result, many executives face the challenges associated with AI: it is a source of untapped opportunity and an inherent risk, but above all an urgent issue to tackle. How can executives exploit the opportunities, manage the risks and minimize the problems associated with AI?

Data translator: the hidden figure

Training professionals not only in technical and scientific areas but also in communication and interpretation becomes essential to generate differential value from AI. A deep understanding of business needs, and knowing how to convey it to the technical teams in charge of implementing AI, is the Holy Grail for companies and providers of this service.

McKinsey, for its part, notes that successful results from AI and data analytics do not depend only on data scientists, data engineers or analytics teams. A transversal figure is required: the data translator.

McKinsey believes this figure can ensure organizations achieve real impact from their analytics initiatives, helping to correctly understand business needs and translate them into scientific-technical language, and vice versa.

Data translation experience gives this figure deep knowledge of the core business and its value chain across diverse areas: distribution, health, marketing, manufacturing or any other environment.

As the consulting firm defines the role, translators help guarantee that the deep knowledge generated through sophisticated analytics is translated into impact at every level of the organization. The McKinsey Global Institute estimates that demand for translators will reach two to four million in the U.S. alone by 2026.

Translators thus draw on their understanding of AI and analytics to convey commercial objectives to the data professionals who will build the models and solutions. Finally, translators ensure the solution produces insights the company can interpret and act on and, ultimately, communicate the benefits of those insights to business stakeholders to drive adoption.

One way to reduce the strategic risk companies take when they decide to get ahead of their competitors is, without a doubt, the capacity to interpret data and offer insights based on it.

 

 


Artificial Intelligence for customer identity

When customers know what they want, companies must figure out what it is. Yet according to Gartner, CMOs invest only 29% of their budgets in new technologies to meet those needs.

Perhaps marketing specialists lack the appropriate technology, or the technology they have is not enough. Often, however, marketing departments do not fully understand the capabilities of the technologies they have paid for and consequently fail to take advantage of them.

To guarantee a high return on investment (ROI) for the business, marketing specialists should start by auditing their organization's technology ecosystem and then set the best ROI strategy.

However, assuming that owning a specific technology will guarantee making money is a common mistake. According to a survey by the consulting firm McKinsey, the technological gap between leading companies and those staying behind is growing. Companies that make data-driven decisions will stand apart from the rest, which are still struggling with basic data analysis and technology.

For both innovative companies and those lagging behind, data analytics emerges as an opportunity to generate better insights; in other words, to build a healthy data culture. Some will implement it and others will not, so some will obtain a higher ROI at the expense of the companies that fail to adopt a data-driven culture.

Once a healthy data-driven culture and the appropriate technology are in place, it is time to listen to customers. As noted above, the more consumer attention is captured, the more customer personalization can grow. Marketing departments invest around 14% of their budget in achieving this long-awaited personalization, very little given what it offers.

AI-based Customer identity

The need to know customers drives the adoption of AI not only in marketing but across the whole organization. The McKinsey Global Institute survey of more than 3,000 companies, "Artificial Intelligence: The next digital frontier?", shows that the earliest AI adopters are closest to the digital frontier, and it is precisely at that frontier where the leading companies of each sector are found. Among these leaders, AI is present at the core of the value chain, is used to boost revenues and reduce costs, and has full executive support. Companies that have not adopted AI at scale, or in their core business, remain unsure of the return such an investment could deliver.

Innovative companies know that personalization for existing and future customers is achieved through customer knowledge. This is where AI comes in fully, and where marketing specialists are equipped to identify each customer and personalize their actions.

One of the key features of customer identity techniques is that they allow us to characterize actionable concepts. These actionable concepts are the result of incorporating expert business knowledge into the AI, in what are also called intelligent observation systems. The result is a set of concepts that provide a broader perspective on customers.

Customer identity techniques therefore enable companies to obtain better insights into, for instance, consumer habits. Once these are identified, the different areas of the organization are better able to make personalized offers, leading to greater profit and higher customer satisfaction. Churn, for example, may be avoided once customer satisfaction is analyzed, and predictive pricing models for certain products can be built.

Innovative companies are those able to answer the classic questions: How do my customers behave? What valuable information can I extract from their habits? How can I improve the company's revenues? In part, this is thanks to their capacity to stay at the state of the art.

 


Adaptive Machine Learning, or how to analyze a volatile environment

The environment is constantly changing. We may think it is immutable, that the data collected on people and their surroundings hardly changes, but nothing could be further from the truth. Data changes so rapidly that predictions based on machine learning techniques soon become obsolete. Why?

According to Gartner's latest hype cycle, one of the technologies set to flourish in the coming years is adaptive machine learning, also known as adaptive automatic learning. It allows online machine learning models to keep learning continuously and in real time, so they can adapt to a constantly changing world.

Due to this high capacity for adaptation, the technology is particularly useful for training autonomous cars, which must be able to incorporate new data in real time, analyze it and make decisions based on it.

However, its uses go beyond autonomous cars (which Gartner estimates are still more than a decade away). Adaptive learning in real time requires efficient reinforcement learning; in other words, an algorithm must continually interact with its environment to maximize its reward. Such algorithms could serve agriculture, online marketing, smart cities, financial institutions or any other industry using IoT.

In these changing environments it is not possible to collect all the generated data, organize and quantify it, and train a "traditional" machine learning model, because the model must be retrained and make decisions in real time.
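To make the contrast concrete, here is a minimal sketch of adaptive (online) learning in Python. It is an illustrative toy, not any vendor's implementation: a linear model updated one observation at a time with stochastic gradient descent, so it can keep tracking a data stream whose underlying relationship drifts, with no offline retraining pass.

```python
class OnlineLinearModel:
    """A toy adaptive learner: weights update one sample at a time,
    so the model keeps tracking a stream whose pattern drifts."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # one weight per input feature
        self.b = 0.0                 # intercept
        self.lr = lr                 # step size of each SGD update

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def learn_one(self, x, y):
        # single stochastic-gradient step on the squared error
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err


# The stream first follows y = 2x + 1, then drifts to y = -2x + 1;
# the model adapts sample by sample, without a batch retraining job.
model = OnlineLinearModel(n_features=1)
for t in range(2000):
    x = [(t % 10) / 10]
    model.learn_one(x, 2 * x[0] + 1)
for t in range(2000):
    x = [(t % 10) / 10]
    model.learn_one(x, -2 * x[0] + 1)
```

Online learning libraries such as river for Python expose this same learn-one/predict-one pattern for production-grade models.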

An idea that emerged in the 1950s

Although it may sound very innovative, back in the 1950s B.F. Skinner developed a teaching machine focused on skills development. The idea was to monitor students' progress in programmed learning: the machine adapted its questions based on the student's previous correct answers, allowing each student to learn at their own pace.

It was not until the 1970s that this idea was applied to artificial intelligence. As with many other things at the time, however, the development of adaptive machine learning was held back by limits on computing capacity, data capture and storage. Quite simply, the technology was not ready yet.

Despite all this, adaptive machine learning did not die there. In recent decades these technologies have become more agile, scalable and easier to use, although they are still not widely deployed.

Nowadays there are myriad fields for adaptive machine learning beyond autonomous cars and robots. In finance, it allows online data to be incorporated, for example to monitor the stock market. It is also being used for location prediction (very useful for autonomous cars), advertising and insurance.

According to Gartner, adaptive machine learning remains a challenge, as systems based on self-learning and autonomy will require consideration of privacy, ethics and security. Companies, meanwhile, will apply continuous learning to autonomous, smart decision-making.

This does not mean current machine learning methodologies must be discarded: once offline algorithms are trained, adaptive learning makes it possible to improve, contextualize and personalize existing models even further.


Behavioral Marketing: towards product customization

Today we are used to receiving product and service recommendations from companies, and we usually think these recommendations are made especially for us. We believe the product ads sent to us are personalized, targeted to each individual; however, that is not quite true.

For most people, personalization and recommendation are the same thing. When a company adapts its services to meet customers' needs, it can use recommendations to personalize their purchases; but although the two words may sound like synonyms, they are not.

The technology firm's first approach to what constitutes a personalization engine appears in Gartner's 2015 guide to digital personalization engines, which defines it as:

A process that creates relevant, individualized interactions between two parties, designed to improve the customer's experience. It uses information based on the recipient's personal and behavioral data, as well as the behavioral data of similar people, to deliver an experience that meets specific needs and preferences.

Personalization engines, then, are the solutions that support the personalization process. They determine and deliver the customer experience, sending personalized messages through digital channels (email, text, push notifications) and providing analysis and reports to the business user.

Product or content recommendations, by contrast, involve a small selection of catalogue items presented to the user in a variety of ways (banners, messages, recommendation tables). In other words, a recommendation is more like a presentation of items.

A recommendation is thus a form of personalization, but personalization is not merely a form of recommendation. Take YouTube: Google's video platform can suggest related videos based on a user's previous viewing habits, but a recommendation could equally be based on what other YouTube users watched. Facebook, by contrast, personalizes your news feed based on your friends' activity without recommending one item over another. That is personalization, because it is based on the individual's specific habits rather than a general algorithm; the more you know about a person, the better, and not only about their video habits. In short, a recommendation is usually based on items, whereas personalization is based on individuals.

So how can we talk about personalized recommendation systems when one concept refers to a product and the other to a person? Simple: by incorporating the person's behavioral habits into the recommendation engine. The more you know about a customer, the more personalized the product recommendations will be.
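A minimal sketch of that blend, with invented item names and weights: a global popularity score (the recommendation signal) is re-weighted by the individual's behavioral affinity for each item's category (the personalization signal).

```python
def personalized_ranking(popularity, categories, user_clicks, k=3):
    """Rank catalogue items by blending a global signal with a personal one.

    popularity:  item -> global score (what everyone engages with)
    categories:  item -> category label
    user_clicks: category -> how often THIS user engaged with it
    """
    total = sum(user_clicks.values()) or 1
    def score(item):
        affinity = user_clicks.get(categories[item], 0) / total
        # boost popular items in the categories this user actually visits
        return popularity[item] * (1.0 + affinity)
    return sorted(popularity, key=score, reverse=True)[:k]


# A slightly less popular item in the user's favorite category
# outranks the globally top item: personalization at work.
popularity = {"video_A": 1.0, "video_B": 0.9, "video_C": 0.5}
categories = {"video_A": "rock", "video_B": "pop", "video_C": "rock"}
user_clicks = {"pop": 9, "rock": 1}
print(personalized_ranking(popularity, categories, user_clicks))
# -> ['video_B', 'video_A', 'video_C']
```

With no behavioral data at all, the function falls back to ranking by pure popularity, which is exactly the recommendation-without-personalization case described above.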

Today only a few companies can truly personalize their behavioral marketing strategies, because doing so requires not only a recommendation algorithm but also the incorporation of all customer and non-customer data. That is the challenge. Let's look at an example of how to overcome it.

 

Behavioral Marketing in banking

We know the banking sector is moving at staggering speed to meet customers' needs. Gone are the days when financial product ads were targeted at everybody regardless of their personal circumstances, causing many a headache.

Today, banks' sales and marketing departments need to know their customers better in order to offer them what they really need at the right time. One strategy banking institutions are using is to incorporate users' browsing data.

Behavioral marketing bases sales, commercial and advertising activity on the customer's behavior patterns, obtained through cookies, social networks, web analytics, browsing and purchase history, and IP addresses. In this case, cookies collect anonymous information about browsing habits with the aim of targeting ads according to the user's interests.

Behavioral marketing therefore allows ads to be customized to user profiles and avoids sending customers ads that would end up dismissed as spam.

Consequently, when customers access a bank's web page they leave a trace, a "trail of breadcrumbs" showing what they have seen, for how long and in which part of the site, and whether they have taken any action such as checking their credit. If we then add all the available information about a customer (browsing activity, purchase history, IP address, social networks, app data, etc.), we learn their propensity for certain banking products.

Keeping user identity in mind allows customers, and their relationship with the institution, to be characterized through a series of variables that describe each user more precisely. Using browsing habits as part of that characterization provides a broader view and therefore improves the personalization of product recommendations.

Deep learning and machine learning techniques make it possible to incorporate user data that enhances the personalization model and reveals a specific person's propensity to buy a certain product; in other words, better conversion rates for the sales department's product offers.
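As a hedged sketch of such a propensity score, here is a logistic function over behavioral features. The feature names and weights are invented for illustration; in practice the weights would be learned from labelled purchase data rather than set by hand.

```python
import math

def propensity(features, weights, bias=-2.0):
    """Logistic propensity score: estimated probability that this
    customer will buy the product, given behavioral signals."""
    z = bias + sum(w * features.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))


# Illustrative weights a model might learn for a personal-loan offer
weights = {
    "visits_to_loan_page": 0.8,      # browsing signal
    "checked_credit_score": 1.5,     # strong intent signal
    "months_since_last_product": -0.1,
}

engaged = {"visits_to_loan_page": 3, "checked_credit_score": 1}
passive = {"months_since_last_product": 24}
```

The sales team would then prioritize offers to the customers with the highest scores, which is where the better conversion rates come from.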

Gartner's hype cycle has changed

As it does every year, the technology research firm Gartner has released its technology trends for the coming years. CTOs, CIOs and CEOs worldwide evaluate these technologies to decide where to focus their technological efforts and how to implement them strategically.

In 1995, Gartner began publishing its top emerging technology trends, and since then its predictions have gained a reputation in the tech community. The model has come to be seen as a guide to new technology adoption. The graph is divided by maturity: whether a technology is an emerging trigger, is surrounded by excessive enthusiasm or overestimation, is in the trough of disappointment, or is being gradually adopted.


Those who lived through those days will remember that this assistant was not very successful: it was more of a nuisance than a help, and Redmond finally decided to eliminate it. Today, almost 25 years later, virtual assistants are enjoying a second youth thanks to chatbots.

Why? Because researchers, startups and leading technology companies kept developing this type of technology over the years, turning an extravagant idea into something essential for society.

Gartner's first hype cycle, 1995

Gartner’s Hype cycle today

This year's trends curve highlights emerging technologies expected to have a strong impact on business, society and people over the next 10 years. The 2019 list ranges from low-latency global internet to a virtual map of the real world and the imitation of human creativity.

However, the hype cycle has been slimming down lately, with "fewer things" on its right side, which suggests Gartner intends to focus on technologies that can really make it. This follows criticism of Gartner's curve in recent years, as very few of the listed technology trends have managed to move beyond the theoretical phase.

In fact, this year's curve contains no blockchain, artificial intelligence, digital twins, deep learning or augmented reality, as these technologies have already been widely implemented. Yet looking back at the firm's lists from recent years, some technologies given a two-year horizon have taken over a decade to become reality. In 1995, for example, voice recognition was supposedly at its productivity phase; only now has it found its "place in the world", thanks to the development of deep learning. Two decades later.

We might therefore conclude that Gartner's predictions are not very good. None of its 24 emerging technology curves included virtualization, NoSQL, open source, MapReduce or Hadoop. This points either to an overestimation of emerging technologies, or simply to technologies that do not follow the rules of Gartner's curve.


The top ten AI uses in Marketing [Infographic]

Today, Artificial Intelligence applications in marketing and sales enable companies to know their customers better and offer them the best product promotions in real time. Chief Marketing Officers (CMOs) and their teams need machine learning and artificial intelligence to stand out and gain an edge over their competitors.

In pursuit of customer satisfaction, the best CMOs manage to balance their marketing strategies with the elements that make the company's brand and experience unique.

Knowing how, when and where potential buyers make up their minds to buy makes marketing strategies far more effective. Advanced analytics enables customer segmentation for a better understanding of their preferences. With that knowledge, purchase propensity, churn risk in the buying process and suitable pricing can be estimated, among many other things.
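As a concrete sketch of the segmentation step, here is a plain k-means implementation in Python. It is a minimal toy rather than a production library: each customer is a vector of behavioral features, and the algorithm groups customers with similar behavior into segments.

```python
import random

def _dist2(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _mean(group):
    n = len(group)
    return tuple(sum(p[i] for p in group) / n for i in range(len(group[0])))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means: groups behavior vectors into k segments."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: _dist2(p, centers[i]))
            groups[nearest].append(p)
        # move each center to the mean of its group (keep it if empty)
        centers = [_mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups


# Each customer: (visits per month, average basket in euros).
# Two clear segments emerge: occasional low spenders vs. frequent high spenders.
customers = [(1, 10), (2, 12), (1, 11), (20, 90), (22, 95), (21, 88)]
centers, segments = kmeans(customers, k=2)
```

Once segments exist, propensity and pricing models can be fitted per segment instead of for the whole customer base, which is where the sharper estimates come from.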

According to a recent survey by Forbes Insights and Quantcast Research, AI allows marketing and sales departments to boost sales by 52% and increase customer retention by 49%.

The infographic shows the ten most relevant contributions of Artificial Intelligence to marketing teams. According to the reports, over the next two years AI and machine learning technologies will be adopted by the companies that recognize their benefits.

 



Math and music: the advantages of using artificial intelligence

We are used to Spotify selecting and classifying music based on what we have listened to and our musical taste. The Swedish company receives over 20,000 new songs and podcasts each day, and it handles them with the help of artificial intelligence, which surfaces the music we have listened to most over a given period (its "your summer memories" feature) and creates different playlists depending on what we have listened to lately.

With this help, classification by genre has become obsolete: the playlists generated by artificial intelligence depend not on genre but on "good music". Obviously not everybody likes the same genre, but specific mathematical patterns that cut across genres do work. That is why people who like pop can still enjoy a song classified as rock.

There are several fields in which artificial intelligence can improve musical processes. In the 1950s, Alan Turing was the first to record computer-generated music. This was the start of an interesting area in which AI creates music through reinforcement learning: the algorithm learns which characteristics and patterns define a specific genre and finally composes.

Thanks to this, artificial intelligence helps companies create new music or assists composers in their creations.

Another field in which artificial intelligence has found great acceptance is editing. Listening to music with a clear, clean sound is undoubtedly one of the features music lovers appreciate most. Although the creative component is still necessary, AI can be trained to edit audio properly for those who lack that skill.

When we talk about mathematics applied to music, we must understand that it spans areas as diverse as tuning, musical notes, chords, harmonies, rhythm, beat and notation.

The beginning

In 2002, Polyphonic HMI was founded on the premise of applying artificial intelligence to the music industry. By studying the mathematical components of music, it could estimate a song's probability of success. Although an artist's success depends on many factors, the system simplified the task of finding which song could serve as the launch single of a new album, or even of a new artist. Record companies, producers and agents could thus allocate resources in a more favorable context. Commercializing music has always been an expensive business, and finding promising artists and hit songs a big challenge.

Fifteen years later, the leading technology companies are investing in this technology across different processes of the music industry. Thanks to mathematics, we can see the impact of artificial intelligence on the music we listen to, enriching our musical experience.

 


Learn about the state of Quantum computing

In the last quarter of 2018, the E.U. launched the Quantum Flagship megaproject, with more than €1,000 million in funding over a 10-year period and more than 5,000 researchers committed to developing quantum technologies and bringing their capabilities to market. Europe has finally joined the race started by China and the U.S., both of which want to dominate, or at least lead, the so-called quantum race. The European Union has at last embarked on the most powerful research and development program of recent years.

The Quantum Flagship will build a European network of quantum technology programs, fostering an ecosystem that provides the knowledge, technologies and infrastructure needed to develop this industry. Research focuses on quantum communication (QComm), quantum computing (QComp), quantum simulation (QSim), quantum metrology and sensing (QMS), and basic science (BSci).

The big technology companies researching quantum computing are immersed in a commercial race to see who will be the first to run a quantum computer. But what are the main features of quantum computing, and how can we benefit from it?

Current computers, whether laptops or mainframes, are based on binary circuits: in essence, a "yes/no" response. Programmers can therefore build tasks from "if this, then that" statements. Sometimes, however, a computer of this kind cannot solve certain problems efficiently. In many mathematical optimization problems, for example, current computers must evaluate every possible solution individually until the optimum is found.

Quantum computers, in contrast, follow a completely different concept: they do not use "yes/no" binary logic. By their nature, their basic circuits can respond "yes/no/both (in equal proportion)". With a quantum system, a developer can implement "if this, then that/not-that/both" instructions, and that is what makes the difference: they can explore a great volume of possibilities at once, offering very efficient solutions to very complex problems such as transport route optimization.
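That "yes/no/both" behavior can be illustrated with a few lines of Python simulating a single qubit as a vector of two amplitudes. This is a classical simulation for intuition only: the Hadamard gate puts the qubit into an equal superposition of 0 and 1, and squaring the amplitudes gives the measurement probabilities.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a one-qubit state [amp0, amp1]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probability is the squared amplitude."""
    return [abs(amp) ** 2 for amp in state]


ket0 = [1.0, 0.0]          # a definite "no": always measures 0
plus = hadamard(ket0)      # superposition: 0 and 1 in equal proportion
probs = probabilities(plus)  # ≈ [0.5, 0.5]
```

Applying Hadamard a second time returns the qubit to a definite 0, showing the operation is reversible; quantum toolkits such as Qiskit work with exactly these state vectors, only at a much larger scale.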

So far, quantum computing has not been developed in the way we are used to (a computer executing tasks), because one of the main obstacles is building general-purpose quantum computers. Compared with a normal computer, a quantum machine is extremely complex. The first available models, like IBM's recent version, are based on superconducting components (including Josephson-junction devices) that must operate at a temperature below -273 ºC, close to absolute zero. The cryogenic technology needed to read and reliably manipulate these qubits is extremely expensive and complex.

Despite the technical difficulties inherent in quantum-mechanical systems, extensive research has confirmed promising applications of quantum computing as soon as the necessary hardware is ready. Artificial Intelligence is among them.


Supervised and unsupervised learning techniques for fraud detection

Artificial intelligence is redefining fraud prevention, as it allows information to be extracted from experience: transaction activity, behavior and trends. Before AI, the methods applied were rule-based; they helped analyze historical fraud patterns but could not prevent new fraud. Although these models could identify fraud attempts, they provided no information about the future.

Nevertheless, fraud has grown more technologically sophisticated, and more precise and efficient attacks have multiplied in recent years.

As a result, the companies most exposed to fraud, mainly in banking and insurance, must monitor each customer's potential risk to the institution with greater accuracy and acuity. Decisions on accepting or rejecting payments, limiting chargebacks and reducing operational and reputational risk are now much easier.

Fraud prevention in the future will depend on combining supervised and unsupervised machine learning. Supervised learning finds patterns in historical events, factors and trends, while unsupervised learning looks for relationships and links between variables. Combining both methodologies would help prevent fraud in the following ways:

Real-time detection. AI enables attacks to be detected in real time, instead of the weeks it usually takes before chargeback requests start arriving.

Thwarting the most sophisticated fraud attempts. Fraud techniques grow ever more sophisticated; AI would help prevent and reduce these attacks.

Real-time scoring. Analysts get a score that gives them a better perspective for setting thresholds that maximize sales and minimize losses in real time.

Immediate transactions. AI-based fraud prevention systems enable immediate transaction approval, provided it falls within the chargeback thresholds of the main debit and credit cards.

Fewer false positives. Supervised and unsupervised learning together reduce false positives, which current techniques cannot detect efficiently. Often, when a customer pays an unusual amount or from a new location, the system blocks the card, wrongly interpreting it as suspicious activity. With AI, changes in a customer's spending habits can be identified much more precisely.

Profitability in low-margin products. AI has allowed insurance companies to keep their business profitable and attract new customers whose purchase history is not part of the supervised fraud systems' training data.

Supervised and unsupervised learning should be complemented with expert knowledge, in a mixed approach that focuses attention on the most suspicious cases flagged by the AI.
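The combination described above can be sketched in a few lines of Python. The blend below is purely illustrative (the weights and the three-sigma cap are assumptions, and the supervised score is taken as given rather than trained here): an unsupervised z-score flags deviation from the customer's own spending history, and it is blended with a supervised score learned from labelled past fraud.

```python
from statistics import mean, stdev

def anomaly_score(history, amount):
    """Unsupervised signal: how many standard deviations the new
    amount sits from this customer's own spending history."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma if sigma else 0.0

def fraud_score(history, amount, supervised_score,
                w_sup=0.6, w_unsup=0.4):
    """Blend the supervised model's output with the anomaly signal.
    Weights and the 3-sigma cap are illustrative choices."""
    z = min(anomaly_score(history, amount) / 3.0, 1.0)
    return w_sup * supervised_score + w_unsup * z


history = [20.0, 25.0, 22.0, 30.0, 24.0]  # customer's usual payments
low = fraud_score(history, 26.0, supervised_score=0.1)    # routine purchase
high = fraud_score(history, 500.0, supervised_score=0.8)  # unusual + known pattern
```

Because the anomaly term is computed per customer, a large payment only raises the score when it breaks that customer's own habits, which is exactly how false positives are reduced.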