
Investments in AI companies

2019: 40% of AI companies lie to investors

At the end of September 2019, the Wall Street Journal published the results of a study according to which 40% of 2,830 European startups claiming to use AI had lied to investors. In reality, all the tasks were performed by people rather than computers.

Whereas in 2013 only one in 50 new startups used AI, by 2019 every twelfth company was offering AI services. At the same time, the European startup ecosystem is becoming more mature: investment is increasing and output is growing. As a result, competition for partners and clients is growing as well.

The Wall Street Journal published the results of a study according to which 40% of 2,830 European startups claiming to use AI had lied to investors

It turned out that executives are even more likely to encounter a fake AI company than false claims about a startup's sustainable development. Given the rapidly growing valuations of AI companies, all clients are advised to carry out comprehensive due diligence on the AI before signing a contract. Specialists also note that some companies mislead their partners and clients unintentionally: if an executive does not hold a degree in computer science, he cannot realistically assess how the company's AI tools work and therefore has to take his employees at their word.

Experts therefore give the same advice to the heads of all companies, both those that work with AI and those looking for an AI partner: make sure you are dealing with genuine AI technology. A cross-functional team should oversee the AI's operation across the entire enterprise throughout the tool's lifecycle.

All of the AI's actions should be explainable and interpretable. AI technologies should be equipped with tools for detecting systematic errors in their operation, and the technology department should ensure the AI's reliability and safety by monitoring data and models for obsolescence.[1]

Notes