Thinking, thinking, thinking, always thinking

Intro

A couple of days ago I read the following article on AI and jobs. The author, Mr. Kai-Fu Lee, goes back a bit to tell his tale: a near-death experience, followed by regret, then resolved into a vision for the current and upcoming age of AI. In the article he argued that for us to co-exist with AI (take note he didn’t say to win), we must create avenues in life, particularly jobs, that take advantage of and rely on our humanity. The specific quality he proposes as a solution to the job-market-wrecking-AI problem is love. Mr. Lee did mention other qualities we often think are intrinsically tied to what makes us human, but the main one is love. His article got me thinking about values and AI, and I’ve thought of several things. So let’s continue and read them below!

The value of data

You may have heard by now the wise words “Knowledge is power” (to which I agree). And you may have heard “Data is the new oil” — that is, data is the basis of our current age, just as oil was in the 20th century. Of course, humanity has amassed knowledge and data over past decades and centuries, though they do say most of our data has been created in just the past few years. You get the idea: we’ve been gathering data for a long time, and now, with the advancement of technology, the data we had stacked up and left to gather dust is being picked up and consumed. We feed our computers, servers, and models with data and create products and services out of them. Where previously data was just lying there, seemingly dead and without much use (except to a select group of entities, like Google, which really had the resources and expertise), our world is now inundated with data and so-called data-driven decisions. Through technology we have managed to extract and find value in data.

Extending the value of data with AI

And then there’s AI, which, as Mr. Lee said, is really just another piece of technology that consumes our data and tries to get value out of it. But it doesn’t only extract value from data; AI also extrapolates value out of it. It creates models, decisions, predictions, and generalizations from our data and applies them to complex problems it may have never seen before and which we would like to solve or get better at. AI makes things like self-driving cars, better and more relevant ads, and AlphaGo possible. On top of that, with current advances in technology and research, AI has really taken off, not just in complexity but also in speed and availability. AI has taken the value of data a step further by enabling services and products that really wouldn’t have gotten off the ground without it (think self-driving cars). Though at the same time, it has made several of our jobs redundant.

Extending the value of AI?

Now that AI has given us so much value, where do we go from here? How do we keep ourselves relevant in a world where technology and AI can eke value out of data — the resource believed to be most crucial in our age — better (arguably) and faster (definitely) than we can ourselves? Mr. Lee’s proposed solution is more akin to complementing that value than extending it. Rather than trying to come up with a way to do better than AI, or to add something to AI’s results that it cannot produce itself, to him it is better to work alongside AI: to let ourselves be complemented by it, and to complement it with our inherent humanistic qualities.

The problem is…

The idea itself is fine in my opinion, but the execution may not be so simple. I’m pretty sure there are many people far more qualified than me to talk about AI and how to solve the possibility of us being run over by it. But despite the many influential and capable people out there, it seems the fear and worry have not gone away, and no actual implemented solution is on the way. At best, it’s because the problem is inherently hard to formulate and to solve in a way that can satisfy the vast majority of our world. At worst… it’s because we really don’t think there is significant value to be gained from our inherent humanity.

Okay, I doubt it’s as bad as I purport, but I do think the thought has some merit. At the very least, I think we don’t give enough objective appreciation to our humanistic qualities. For example, I’m sure most if not all of us would agree that a great and caring parent goes a long way in the upbringing of a child and their future. A child who is brought up well and with great care probably runs less risk of incurring mental health problems related to their environment. So those kids can be healthier in the future, and less cost is needed to maintain their mental health. A great parent also seems likely to enable their child to attain greater heights, and thus make a higher social contribution in the future. These are all valuable things for us and our society, with direct or indirect consequences for our economy, but I’m not aware of any widespread economic measurement that takes into account the quality of parenthood.

There are many other humanistic qualities which I am sure we do value in our lives, but which are not given their due appreciation in an objective manner. Whether that objective quantification or assessment is in terms of monetary value, or recognition, or something else, I do think we are lacking in that aspect. This is a problem that is not inherent in AI but in us: in how we see, value, and think of life and of ourselves. If we are to complement the value of our AI with the value inherent in us, then we must be able to recognize the value of our human capabilities and existence, in ourselves and in others.

Wrapping up

I think Mr. Lee’s blueprint, though barebones, is something agreeable. Rather than battling against AI’s inherent ability to quickly and accurately derive value out of data, we should strive to look within ourselves for ways to bring about value through our own nature. As AI achieves feats we have previously only associated with humans, we will need to branch out to other areas, whether to compete or to co-exist with AI. But branching out also entails the desire and ability to find value in those areas — a problem which I think we, as humans and as a society, need to solve ourselves by re-aligning our values.