

A number of factors stand in the way of deploying large-scale artificial intelligence projects, including a lack of maturity in multi-cloud architecture, difficulty in putting AI solutions into production, and the fear of being unable to comply with data management rules or to guarantee data security.

We’re all hearing about decision support, predictive analytics, data visualisation and trend analysis, reporting, smart search engines, and so on. All such tasks can be performed by applications, provided they have the capacity to exploit high-quality data on an industrial scale. Data warehouses, data lakes, HPC, Business Intelligence, big data and AI have been around for years, but what’s new is the vast amount of data being generated by the internet and IoT, the capability to consolidate and visualise data in real time, and the shift towards putting AI projects into production. If businesses don’t run with these technological developments, they will struggle to be innovative.

Structured and unstructured data: why consolidate it?

While business data stored in databases has been centralised in data warehouses for around 20 years, the internet and IoT have made this centralisation more complex due to the exponential growth in unstructured data.

The game has changed, not only because of the amount and diversity of data but also because of the real-time nature of these operations. Unstructured data, whether it is produced by the internet (logs, photos, videos, sound, databases, connected objects), by IoT (single objects, moving or static) or by Edge Computing (manufacturing sites, for example), is stored in data lakes. Once processed by BI or big data tools, it delivers information that can be used to carry out predictive analytics, draw behavioural lessons, reveal trends, etc.

The difference between past and present data collection and consolidation lies in the speed of execution. Whereas these operations used to be undertaken on a quarterly or monthly basis, they are now performed in near real time. This responsiveness is revolutionising applications, bringing multiple benefits. In practice, it might mean being able to instantly visualise a cash withdrawal from a bank account made from any cashpoint, or a company being alerted in real time about malfunctions on a production line.
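
To make the production-line example concrete, here is a minimal Python sketch of near real-time alerting; the sensor stream, metric, and threshold are hypothetical, and in a real deployment the readings would arrive from an IoT gateway or message broker rather than a simulator.

```python
# Minimal sketch: near real-time monitoring of a production-line metric.
# The sensor stream, metric, and threshold are hypothetical placeholders.
import random
import time
from datetime import datetime

VIBRATION_THRESHOLD = 0.8  # assumed alert threshold, arbitrary units

def sensor_stream():
    """Simulate a stream of vibration readings from a production line."""
    while True:
        yield random.uniform(0.0, 1.0)
        time.sleep(0.5)  # a new reading every half second

def monitor(stream, max_readings=20):
    """Flag readings above the threshold as soon as they arrive."""
    for i, reading in enumerate(stream):
        if reading > VIBRATION_THRESHOLD:
            print(f"{datetime.now().isoformat()} ALERT: vibration {reading:.2f}")
        if i + 1 >= max_readings:
            break

if __name__ == "__main__":
    monitor(sensor_stream())
```

The point is not the few lines of code but the latency: the alert is raised the moment the reading arrives, not at the end of a monthly batch.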

No artificial intelligence without quality data

In order to deliver relevant, understandable, value-added information, data must be high-quality, contextualised, and enriched. By cross-checking and exploring all kinds of data, big data and BI tools reveal cause-and-effect relationships that are not immediately evident, highlighting links between seemingly separate events.

In retail, for example, by analysing purchase history these tools help interpret consumer habits (time of purchase, purchase value, product type, purchase channel, etc.), identify trends and make purchasing predictions.
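
As a purely illustrative sketch (the data, column names, and linear trend below are assumptions, not a specific product's method), a few lines of Python with pandas show the kind of cross-checking involved: aggregating purchase history by channel to read consumer habits, then fitting a simple trend to monthly spend to make a naive purchasing prediction.

```python
# Minimal sketch of trend analysis on purchase history (illustrative data only).
import numpy as np
import pandas as pd

# Hypothetical purchase history: one row per transaction.
purchases = pd.DataFrame({
    "month":   [1, 1, 2, 2, 3, 3, 4, 4],
    "channel": ["web", "store", "web", "web", "web", "store", "web", "web"],
    "value":   [40.0, 25.0, 55.0, 30.0, 60.0, 20.0, 75.0, 35.0],
})

# Consumer habits: number of purchases and average value per channel.
habits = purchases.groupby("channel")["value"].agg(["count", "mean"])
print(habits)

# Trend: fit a straight line to monthly spend and extrapolate one month ahead.
monthly = purchases.groupby("month")["value"].sum()
slope, intercept = np.polyfit(monthly.index, monthly.values, deg=1)
next_month = monthly.index.max() + 1
print(f"Forecast for month {next_month}: {slope * next_month + intercept:.2f}")
```

Production BI and big data tools apply the same logic at far greater scale, with far richer features and models.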

Since scientific research requires complex calculations and real-time processing of big data, businesses turn to HPC (High Performance Computing). This is the case in the health sector with genome research, or in the pharmaceutical industry with drug manufacturing.

So the real challenge for companies today lies in their capacity to integrate and deploy these tools – without which they cannot obtain intelligent, intelligible information in real time.

How AI helps make predictions

Businesses have now recognised how crucial AI is to automating a number of processes and producing relevant information. However, they often use the technology sparingly, through a proof of concept (POC) approach. Many projects have certainly been launched – including predicting failures on production lines, recommending impulse purchases, and enriching search-engine queries – but they are limited in scope. Indeed, not all businesses yet appreciate how important it is to move to a stage where AI can be rolled out on an industrial scale, enabling major, truly innovative projects to be implemented for business lines.

Most French senior executives today believe that they need to focus on AI in order to achieve their growth objectives, but they are fearful of scaling up. This concern stems from the difficulty of fully understanding widespread AI deployment on multi-cloud IT architecture. As long as a POC-type project is limited to a use case with a reduced scope, AI deployment is easy and risk-free. However, as soon as projects start using thousands of data points and multiple features scattered across various clouds, they have to take legislative, regulatory, traceability, and security aspects into account. They have to be incorporated into a secure, transparent, and auditable production process. As a result, most businesses remain at the POC stage, for fear of not complying with all of these requirements.

To help them put their analytics processes (data lakes, data warehouses, BI, big data, HPC, and AI) into production and control their data, I advise them to use technological building blocks that are not linked to public cloud offerings and to integrate them into a DevOps/CloudOps process to speed up deployment.
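
As a minimal sketch of such a cloud-agnostic building block (the model, port, and payload format are assumptions, not a prescribed stack), the following Python service exposes predictions over plain HTTP using only the standard library; packaged as a container, it can be promoted through a DevOps/CloudOps pipeline and run unchanged on any cloud or on premises.

```python
# Minimal sketch of a cloud-agnostic scoring service (hypothetical model and port).
# Standard library only: no dependency on a provider-specific AI service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Placeholder model: in practice, load a trained model artefact here."""
    return {"score": sum(features) / max(len(features), 1)}

class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload and return a JSON score.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("features", []))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ScoringHandler).serve_forever()
```

Sending a POST request with a JSON body such as {"features": [1, 2, 3]} returns a score; swapping the placeholder predict function for a trained model artefact does not change the deployment process.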

15/10/2020

Yves Pellemans, CTO Axians Cloud Builder
