We used to spend a ton of money on data processing capacity while maintaining and updating both the hardware and the software on our computers.
Now we buy anonymous, faceless capacity from the cloud at bargain-basement prices, on demand and without shedding a single tear over the actual infrastructure behind the latest-of-the-batch Linux environment we are running.
The invention of databases gave rise to companies like Oracle, which rode the seemingly endless SQL wave of corporations big and small, raking in the cash that built up Larry Ellison’s private plane collection and still left him pocket change of tens of billions of dollars on top.
Now all that has moved to the cloud as well, and we pay according to our usage. We can use SQL or NoSQL databases without paying hundreds of thousands of dollars for licenses, however beefy our Big Data application is. The software infrastructure is included in the usage price.
The next thing to be commoditized is Artificial Intelligence: several open source kits are already available for making sense of all those petabytes of data via Machine Learning. Expert systems that help us squeeze the very last ounce of business value from our data lakes are rapidly moving into the mainstream, becoming the new “leading edge” for corporate IT.
On the other end of the scale, speech-based interaction with intelligent data back-ends is used by millions of ordinary users daily, unwittingly creating another enormous data set to be exploited by the companies offering these services “for free”.
While the commoditization of processing power and data can be seen as just another natural evolution of daily operating practices, the coming explosion of AI adds completely new dimensions to our computerized lives: accountability and ethics.
When an autonomous vehicle accidentally kills a pedestrian, who will take responsibility?
Or when a DeepFake video smearing an opponent is spread over Facebook to hundreds of millions of carefully pre-screened recipients, producing a narrow win for an unscrupulous candidate who has no problem spreading such lies to win at any cost, should the result be invalidated for the apparent ethics violation? And was it Facebook’s fault or the candidate’s?
Being in tech has been pure joy for those of us riding the wave of constant generational upgrades to our working environment. We have seen only improvements pile up, with ever-lower costs and reduced maintenance needs. We have been able to concentrate on “the good stuff”, like making sense of the Dark Data around us – the leading principle behind Datumize.
But with AI expanding from trading floors, where it makes millisecond trading decisions, to everyday use in autonomous vehicles and even to self-determining target mapping on the battlefield, we are no longer talking about just “better technology”. Society as a whole will be affected, and the consequences can be far more far-reaching than a hundred-fold improvement in the cost of computing.
Our societies appear to be uncomfortably unprepared for these new dimensions of IT.
AI is still quite expensive to deploy at large scale, but it has hit the early exponential rise of the traditional hockey stick curve witnessed in so many areas, from vacuum tube radios in the early 20th century to the mobile phone boom of the late 1990s. With the expansion of the Internet of Things, that humongous amount of real-time data can be used either to benefit or to coerce us users.
We should be aware of this ongoing transformation, yet unfortunately those who write our laws are mostly blissfully unaware of even the earlier transformations in the IT space: they appear to struggle with much simpler issues like net neutrality and data security.
Whether the future of AI is mega-Orwellian or a perfect world of user-friendly artificial rainbows and unicorns depends on us.