
The 7 Biggest AI Trends for 2023

We are living in a unique time when we can reach an AI expert in real time. You may be surprised by that statement, but it is true for me, and it can be true for you too. Before sharing the seven biggest AI trends for 2023, I had the pleasure of meeting ChatGPT, OpenAI's magical creation, to ask for its AI trend prediction for 2023. Here is the answer it gave, as shared by Forbes: "Explainability and transparency: There is growing interest in ensuring that AI systems are transparent and explainable, particularly when they are being used in decision-making processes that affect people's lives. This could lead to the development of new techniques and technologies for explaining the decisions made by AI systems, as well as the creation of more transparent and interpretable models". Those who have tested ChatGPT will notice the relevance of some of its answers, but also gaps in its knowledge in other cases. ChatGPT pointed to the area of Responsible AI, which I will clarify later among the seven trends. Let's dive into these trends for 2023:

 

 “AI is in a golden age and solving problems that were once in the realm of science fiction” Jeff Bezos

 

  1. Composite AI

Composite AI, also known as multidisciplinary AI, is defined as the combined application of different AI techniques to achieve better productivity, efficiency and business growth. For instance, it can incorporate machine learning, natural language processing, knowledge graphs, reinforcement learning, rules-based approaches and more. These techniques can be used in sequence or in isolation to achieve the best analytical outcome. Composite AI will improve the learning efficiency and the level of "common sense" of AI systems and, ultimately, solve a wide range of business pain points. We are often asked to identify the single AI technique (algorithm) that will solve a business problem. In this area, well defined by Gartner in its annual "Hype Cycle for Artificial Intelligence" in summer 2020, we should, as usual, focus first on understanding the business problem. This is key to defining the AI techniques needed to generate knowledge and solve the business pain point. Let me share a few examples from various sectors and industries (a short code sketch follows the list):

  • Composite AI enables the retail industry to respond rapidly to changing demand for particular products, analyzing in real time the constant flow of data coming from multiple points of sale and proposing the best actions (Next Best Offer and Next Best Action).

 

  • In the insurance sector, the target is to propose the best coverage plan for a new client. A knowledge graph gathers relevant information about the potential client, such as insurance history, interests and activities. A rules-based approach then analyzes the customer and identifies the most relevant criteria to consider for the contract. An NLP technique can initiate an automatic interaction with the potential client and suggest the best advisor to engage with the future customer in person and personalize the offer. In this example, we combine several AI techniques to build the intelligent solution and embed insights along the way.

 

  • The banking industry is exploiting multidisciplinary AI in multiple business cases such as compliance, anti-money laundering, trading and M&A. As one achievement of such combined techniques, we used computer vision, NLP, machine learning and knowledge graphs for contextual analysis and entity labelling to propose the most personalized customer journey.

 

  • In the healthcare industry, composite AI uses computer vision to analyze X-ray images and NLP to interpret test results. Knowledge graphs help evaluate the relevance of treatments, while chatbots and contextual search engines improve patient engagement and support.
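To make the pattern concrete, here is a minimal, hypothetical Python sketch inspired by the insurance example above: a rules-based eligibility check, a small machine-learning scorer and a trivial keyword-based NLP step chained into one decision flow. All names, data and thresholds are illustrative, not a real client implementation.

```python
# Hypothetical composite-AI flow: rules + a small ML model + a simple NLP step.
from dataclasses import dataclass
from sklearn.tree import DecisionTreeClassifier

@dataclass
class Prospect:
    age: int
    years_insured: int
    prior_claims: int
    inquiry_text: str

# 1) Rules-based step: hard eligibility criteria defined by the business.
def passes_rules(p: Prospect) -> bool:
    return p.age >= 18 and p.prior_claims <= 5

# 2) Machine-learning step: a tiny model trained on illustrative historical
#    data -- features are [age, years_insured, prior_claims].
X_train = [[25, 1, 0], [40, 10, 1], [33, 5, 3], [55, 20, 0], [29, 2, 4]]
y_train = [0, 1, 0, 1, 0]  # 1 = a premium plan was the right offer historically
scorer = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# 3) Lightweight "NLP" step: keyword spotting on the free-text inquiry
#    (a real system would use a proper language model).
def detect_interest(text: str) -> str:
    return "family" if "family" in text.lower() else "individual"

def recommend(p: Prospect) -> str:
    if not passes_rules(p):                      # rules decide eligibility first
        return "refer to a human advisor"
    features = [[p.age, p.years_insured, p.prior_claims]]
    plan = "premium" if scorer.predict(features)[0] == 1 else "standard"
    return f"{plan} plan ({detect_interest(p.inquiry_text)} coverage)"

print(recommend(Prospect(42, 8, 1, "Looking for family coverage")))
```

The point is not the individual models but the orchestration: each technique contributes a piece of knowledge, and the composite pipeline produces the final recommendation.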

 

  2. Edge AI

 

According to Gartner, "By 2025, more than 50 percent of enterprise-managed data will be created and processed outside the data center or cloud". As defined by Gartner in its 2022 Hype Cycle, "Edge AI refers to the use of AI techniques embedded in Internet of Things (IoT) endpoints, gateways and edge servers, in applications ranging from autonomous vehicles to streaming analytics". To clarify, Edge AI is the combination of edge computing and artificial intelligence. Edge computing is a distributed computing paradigm that brings computation and data storage closer to the device; artificial intelligence techniques then process the data generated on the device and provide real-time insights. In other words, Edge AI is a system that uses AI algorithms to analyze, at the local level, the data produced by hardware devices. Since the internet has a global reach, the edge of the network extends to almost any location: hospitals, factories, stores and devices almost everywhere. Edge AI is an accelerator for embedding intelligence in devices and offering better service to clients. To get started with Edge AI, five steps can be considered (a minimal sketch of the on-device inference pattern follows the list):

  • Identify the right use case. In the manufacturing industry, predictive maintenance can prevent machine malfunctions; in retail, a smart virtual assistant can be developed.

  • Evaluate your data and application requirements.

  • Understand edge infrastructure requirements. AI inference infrastructure must be performant, efficient, and responsive.

  • Roll out your Edge AI solution. POCs for Edge AI solutions can take between 3 and 12 months.

  • Celebrate your success, since highlighting such experiences will attract more sponsorship, funding and business interest.
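The sketch below illustrates the core edge pattern in Python under stated assumptions: a model (here, a simple rolling-statistics anomaly check standing in for a trained model) runs right next to a simulated sensor, and only the rare flagged events are sent upstream, so most raw data never leaves the device. The sensor, thresholds and cloud call are all illustrative.

```python
# Hypothetical edge-AI loop: inference happens on the device; only anomalies
# are sent upstream. Sensor readings and thresholds are simulated.
import random
from collections import deque

WINDOW = 20        # readings kept in local (on-device) memory
THRESHOLD = 3.0    # flag readings this many standard deviations from the mean

def read_sensor() -> float:
    """Stand-in for an on-device temperature probe; occasionally spikes."""
    return random.gauss(60.0, 2.0) + (25.0 if random.random() < 0.02 else 0.0)

def send_to_cloud(event: dict) -> None:
    """Stand-in for the rare upstream call; most data never leaves the edge."""
    print("ALERT sent upstream:", event)

def run_edge_loop(steps: int = 500) -> None:
    window = deque(maxlen=WINDOW)
    for t in range(steps):
        x = read_sensor()
        if len(window) == WINDOW:
            mean = sum(window) / WINDOW
            std = (sum((v - mean) ** 2 for v in window) / WINDOW) ** 0.5
            if std > 0 and abs(x - mean) / std > THRESHOLD:
                send_to_cloud({"step": t, "value": round(x, 2)})
        window.append(x)

if __name__ == "__main__":
    random.seed(0)
    run_edge_loop()
```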

 

    3. Generative AI

Generative AI is a disruptive technology covering a family of models, among them Generative Adversarial Networks (GANs), that can generate a variety of content types such as text, images, voices, videos, products, pieces of art and even business models. It is used to solve business problems and can offer better protection of privacy, for example through synthetic data. In a recent media interview for "Journal du Net", I explained the use of generative AI in many applications, from predictive maintenance to the generation of synthetic data. As I said from experience with our clients and partners, "Generative AI is the creation's tool that makes machines express their imaginations". Generative AI is applied to multiple use cases such as language translation, where generative neural models produce more accurate translations than earlier approaches. In the healthcare industry, it is used to design medical products and even organic molecules. Gartner predicts that "by 2025 more than 30% of new medicines and materials will be discovered using generative AI". As a concrete example, the biotechnology company Insilico Medicine announced in 2022 that generative AI techniques contributed to the discovery of a new medicine for fibrosis prevention. A Harvard Business Review article mentioned many applications of generative AI, such as marketing, code generation and intelligent conversation, and to learn more about the early days of generative AI and the biggest actors in this area, from OpenAI to Meta, VentureBeat has shared many articles.
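Since the section names GANs specifically, here is a minimal PyTorch sketch of the adversarial setup: a generator learns to produce samples from a simple 1-D Gaussian while a discriminator learns to tell its output from real samples. The architectures, data and hyperparameters are illustrative only; real generative models for text or images are far larger but rely on related training ideas.

```python
# Minimal GAN sketch: generator G vs. discriminator D on 1-D Gaussian data.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0      # "real" data: mean 4.0, std 1.5
    fake = G(torch.randn(64, 8))               # generated samples from noise

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f} (target: 4.0, 1.5)")
```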

 

    4. Adaptive AI

Adaptive AI allows model behavior to change after deployment by using real-time feedback to continuously retrain models and keep learning in both runtime and development environments. Based on new data and adjusted goals, these systems adapt quickly to changing real-world circumstances; in this field, we generally talk about reinforcement learning. An adaptive system adjusts its own behavior to incorporate what it has learned from its experience with new data rather than remaining frozen after the initial training. Gartner expects that by 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in the number of artificial intelligence models they operationalize, and the time it takes to do so, by at least 25%.
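A minimal sketch of the idea, using scikit-learn's partial_fit for incremental updates; the data stream and the drift are simulated, and a production adaptive-AI system would add monitoring, guardrails and retraining policies on top.

```python
# Hypothetical adaptive loop: a deployed model keeps updating from live feedback
# instead of staying frozen, so it can follow drift in the data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

def stream_batch(drifted: bool, n: int = 200):
    """Simulated production traffic; after the drift, the decision boundary moves."""
    X = rng.normal(size=(n, 2))
    shift = 1.5 if drifted else 0.0
    y = (X[:, 0] + X[:, 1] > shift).astype(int)
    return X, y

# Initial fit on pre-deployment data, then keep learning from live feedback.
X0, y0 = stream_batch(drifted=False)
model.partial_fit(X0, y0, classes=np.array([0, 1]))

for batch in range(10):
    X, y = stream_batch(drifted=batch >= 5)   # the world changes mid-stream
    accuracy = model.score(X, y)              # measured before the update
    model.partial_fit(X, y)                   # adapt using the fresh labels
    print(f"batch {batch}: accuracy before update = {accuracy:.2f}")
```

The point of the sketch is the partial_fit call inside the loop: each batch of fresh, labelled feedback nudges the model so it can follow the drift instead of degrading silently.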

 

     5. Responsible AI 

Developing a fair, trustworthy and ethical AI product or solution is not a simple task, and it is not only the responsibility of data scientists and developers. Every company should set up an AI ethics framework: a system of moral principles and techniques intended to guide the responsible use of AI. It should prevent discrimination and ensure transparency. Responsible AI is an area of AI governance that focuses on ethics and democratization. To set up a responsible AI framework, we should consider four principles: comprehensive, explainable, ethical and efficient. Comprehensive AI has clearly defined testing and governance criteria to prevent machine learning models from being hacked easily. Explainable AI is programmed to describe its purpose, rationale and decision-making process in a way that the end user can understand. Ethical AI initiatives have processes in place to seek out and eliminate bias in machine learning models. Efficient AI is able to run continually and respond quickly to changes in the operational environment. The World Economic Forum is supporting industries and societies by setting operational principles within solid ethical frameworks, and we need regulation to make sure that all companies are building responsible AI outcomes.
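To ground the "explainable" principle, here is a small Python sketch using scikit-learn's permutation importance on a public toy dataset: it shows which input features a trained model actually relies on, one of the simplest checks a team can run before trusting a model in a decision process. The dataset and model are illustrative.

```python
# Explainability check: which features drive the model's predictions?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure how much the score drops:
# a larger drop means the model leans more heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, importance in ranked[:5]:
    print(f"{name:30s} {importance:.3f}")
```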

 

“Explainability and transparency: There is growing interest in ensuring that AI systems are transparent and explainable, particularly when they are being used in decision-making processes that affect people's lives” ChatGPT.

 

     6. Low-code or No-code AI

Since organizations are going through rapid change, digital transformation should not remain only in the hands of IT and digitally skilled people. To make this possible, low-code and no-code AI platforms were created to break down barriers for those who do not have the time, or the desire, to acquire technical coding skills. We are talking about the new role of the "citizen data scientist". Low-code and no-code AI platforms allow teams to develop AI models without coding, which means processes can be developed at low cost, implemented quickly and used easily. Artificial intelligence should be accessible to everyone who wants to solve business or societal problems, without requiring the technical skills to code. These low-code AI platforms offer tools to democratize AI and put it in the hands of users from the business lines. People often ask me whether Python counts as low code. Compared with C++, Python is low code, and data scientists exploit a full range of libraries to accelerate coding tasks (see the sketch after this paragraph). Gartner predicts that by 2024, three-quarters of large enterprises will be using at least four low-code tools, and that low-code technology is likely to be responsible for upwards of 65% of total application development.
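As an illustration of that point, the snippet below trains and evaluates a classifier in a handful of lines of Python with scikit-learn, the kind of high-level workflow that low-code platforms wrap behind a visual interface and no-code platforms remove entirely. The dataset and model choice are illustrative.

```python
# A complete train-and-evaluate workflow in a few lines of high-level Python.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```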

In conclusion, no-code AI solutions reduce the entry barriers for individuals and businesses that want to start experimenting with artificial intelligence. These solutions help businesses adopt AI models quickly and at low cost, enabling their domain experts to benefit from the latest technology, and they let data scientists focus on the hard problems. Many low-code platforms exist, and some of them are open source; others are used as accelerators in the context of data modernization, such as IDEA by Capgemini. To learn more about the coming decade of low-code and no-code, I invite you to read the article titled "What the next 10 years of low-code/no-code could bring" by VentureBeat.

 

    7. Quantum AI

Quantum AI is the use of quantum computing to run machine learning workloads and achieve results that are not possible on a traditional computer. With quantum computing, the obstacles to achieving artificial general intelligence (AGI) are reduced, and AI powered by quantum computing could complete years of analysis in a short period. At Capgemini, we have several achievements in this space, such as applying quantum computing to machine learning for quality assessment in the context of the BMW Group quantum computing challenge. In banking, Quantum AI is being developed for risk simulation in financial services. In automotive and aerospace, it involves solving differential equations and machine learning for structural analysis research. In multiple sectors, Quantum AI is applied to cybersecurity and optimization problems, and it can be used with small data as well as in big data contexts. Capgemini created a Quantum Lab and is supporting partners in building such technological capacities, for example in IT security with the German Federal Office. Despite the hype around quantum computing and AI, a great deal of research is still in progress by teams such as Google Quantum AI, with room for many difficult problems such as resolving high-energy impacts on quantum processors or the formation of robust bound states of interacting photons.
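For readers who want a feel for how quantum machine learning models are built, here is a classically simulated Python sketch of a variational quantum circuit, a common building block in the field: a data value is encoded as a qubit rotation, a trainable rotation follows, and the Z expectation value of the resulting state serves as the model output. Everything runs on an ordinary laptop with NumPy and only illustrates the idea; real work uses quantum SDKs and, eventually, quantum hardware.

```python
# Classically simulated single-qubit variational circuit used as a tiny model.
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def circuit_output(x: float, theta: float) -> float:
    """Encode x, apply the trainable rotation, return <Z> of the final state."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])   # start in |0>
    return state[0] ** 2 - state[1] ** 2               # expectation of Pauli-Z

def loss(theta: float, xs: np.ndarray, ys: np.ndarray) -> float:
    return float(np.mean([(circuit_output(x, theta) - y) ** 2 for x, y in zip(xs, ys)]))

# Toy task: labels flip at x = pi/4; the circuit must learn theta to match.
rng = np.random.default_rng(0)
xs = rng.uniform(0, np.pi, 50)
ys = np.where(xs < np.pi / 4, 1.0, -1.0)

theta, lr = 0.0, 0.2
for _ in range(200):
    # Finite-difference gradient (real frameworks use the parameter-shift rule).
    grad = (loss(theta + 1e-3, xs, ys) - loss(theta - 1e-3, xs, ys)) / 2e-3
    theta -= lr * grad

preds = np.sign([circuit_output(x, theta) for x in xs])
print(f"learned theta = {theta:.2f}, training accuracy = {(preds == ys).mean():.2f}")
```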

 

“it is important to be realistic about what it can and cannot do, and to be aware of the limitations and potential risks associated with its use” ChatGPT.

 

Among more than 20 AI trends for the coming three years, 2023-2025, I have shared the seven biggest AI trends for 2023 that will accelerate digital transformation in large organizations. As Jeff Bezos said: "We are now solving problems with machine learning and artificial intelligence that were … in the realm of science fiction for the last several decades. And natural language understanding, machine vision problems, it really is an amazing renaissance". You will hear about more AI trends, such as "democratizing AI" (one of the five major AI trends shared by Gartner for 2023), AI for the metaverse, AI weapons, hyperautomation and AI for cybersecurity. Artificial intelligence is an exponential technology and, as ChatGPT said, "it is important to be realistic about what it can and cannot do, and to be aware of the limitations and potential risks associated with its use".

                                                                                          

                

Dr. Lobna Karoui

Head of Data & AI Center of Excellence at Capgemini
