It has been a long time since I posted anything on this website. That is what happens when one celebrates the new year for far too long. However, I have been keeping myself abreast of developments in the field of AI.
There is one problem that has been of specific concern to me: what ideology, if any, would a future AI conduct itself under?
The answer seems to lie in the proposition that human decisions are ostensibly made on the basis of the values one holds. The question that then arises is how one chooses the value system that should guide one’s approach to life and everything related thereto. To answer this question, it becomes crucial to address the issue of free will that philosophers and scientists have struggled with for ages. Are people really free? Are the choices people make really out of free will? Is there any such thing as ‘free will’ at all? In choosing what they want to choose, do people mistake societal conditioning and normative structuring for ‘free will’? If there is any such thing as ‘free will’, is it limited by the comprehension and imagination of our minds? There are probably a thousand more questions that one can ask in this regard.
It is also pertinent to mention that some might argue that one’s value system is not rigid and can change depending on the situation one is subjected to or faced with. I, however, tend to think that actions are not necessarily related to the value system one attaches normative importance to, but are rather guided by the value system one adheres to under the pressure of societal conditioning and the systemic functioning of its institutions. The internalisation of an externally imposed value system on such a large scale does not seem to present any problem, because of the usual benefits attached to phenomena like ‘conformity’. In other words, people are inspired to pursue the things and principles that society places value in. However, a distinction has to be drawn between the values reflected in our narratives and the values reflected in the functioning of our society. They are not necessarily the same; in fact, they are seldom the same. For instance, poverty and inequality are considered bad for human existence; regardless, policies are argued to have escalated both. The way a thing is articulated does not mean that it is, in essence, directed to that end. Hence, it would not be a folly to conclude that the value system one adopts is the result of what society prioritises. It is also safe to infer that the societal value system is the cumulative expression of human nature.
In view of the aforementioned, it becomes interesting to analyse what ideology would guide AI in the future. Ideology is not only about what kind of goals one should pursue but also about what means should be adopted to achieve those goals. Before we talk about the future, it is important to highlight that current developments in the field of AI (‘narrow AI’) are being guided by capitalism (refer here, also). Big companies are cutting deep into AI development. The main aim is to increase profits manifold; it is also an attempt to control the marketspace by manipulating consumer behaviour. Competition is suffering. New companies are able to use the democratised space that advancing technologies have ushered into capitalism. However, these small, innovative companies are engulfed, often at huge prices, by the big ones. Whether this is bad for the economy in the long term is an issue worth pondering. As a side note, I do not think it is that bad a thing for the development of AI. In the distant future, in my opinion, general AI will prove detrimental to capitalism, and economic structuring will go through unprecedented overhauls.
Will AI need a value system in order to engage in decision making and implementation? Decision making is in relation to a goal. In other words, ‘something needs to be done’ is what invokes the whole dynamics of decision making. That something that needs to be done then becomes a goal. The existence of a goal is itself the result of certain subjective parameters, which entails a value system on the basis of which any goal is arrived at. Now, whether AI will be capable of setting goals for itself is a question that depends entirely on how far it all goes. Considering the claims made by some physicists, it seems that AI will possess its own consciousness, which may or may not be totally different from that of human beings. The question that now arises is whether AI will have the same value systems that human beings have at their disposal today. This can be answered in the negative. Human beings’ decisions are rooted in their biological and psychological elements. AI will be devoid of the biological element, which leaves us with the psychological element. In human beings, psychological and behavioural traits are rooted in the biological element. As AI will be devoid of any biological element, human-oriented psychological and behavioural traits are likely to be missing in it. In that case, what will determine the value system of AI is a question that needs further consideration. I shall write more on this topic in the near future.