In this epoch of AI, serious questions about regulations and laws for AI are being raised. But why now, and if not now, what will the consequences be? The question seems valid, since we have been building and utilizing AI in many forms, from probabilistic estimation in economics to video games, for nearly 70 years, ever since the name Artificial Intelligence was coined by John McCarthy in 1956. To answer this question, we need to see how AI evolved over its first 45 years, from mathematical probabilistic theories to predictive generalization techniques like regression and classification. It is not very difficult to understand what goes on under the hood when solving these problems: tuning the hyper-parameters and minimizing the loss function. We can tune and control the parameters and easily nudge the responses, which mostly stay on expected lines, since the range of outputs is always defined. What changed over the last 20-25 years is the advent of neural networks, which managed to solve more complex predictive generalization problems, but the basic concept behind the idea remains the same: tuning parameters and minimizing a loss function. We kept increasing the depth and complexity of the networks, which saw the rise of CNNs and RNNs around 2013 and then the transformer architecture in 2017. The climax of this development has been reached today with the advent of Gen AI, when virtually all the data available on the internet was fed to GPT models for training. Something mysterious happened inside the neurons of the neural networks. The pre-decided output range expected from earlier models changed drastically. The range of responses now matches the comprehension capabilities of a human, which truly is a magnificent achievement, and it is only going to improve in the future.
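The tune-parameters-and-minimize-loss loop described above can be sketched in a few lines. This is a toy illustration only; the data, learning rate, and single-weight model are made up for the example:

```python
# Toy illustration of "tune parameters, minimize loss": fit a single
# weight w so that y ≈ w * x, using plain gradient descent on the
# mean squared error. The data here is hypothetical (y = 2x).

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # ideal w is 2.0

def loss(w):
    # Mean squared error between predictions w*x and targets y.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    # Analytic gradient of the MSE with respect to w.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0               # initial parameter
lr = 0.05             # learning rate: a hyper-parameter we tune by hand
for _ in range(200):  # each step nudges w to reduce the loss
    w -= lr * grad(w)

print(round(w, 3))  # converges close to 2.0
```

Because the output is a bounded numeric prediction, the behavior of such a model is easy to anticipate and control, which is exactly the contrast with Gen AI drawn above.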
I would like to illustrate what that mystery is. Recently I attended the MLDS summit organized by Analytics India. The theme was related to Gen AI research and its industry applications. One of the speakers, Jaspreet Bindra, had a very interesting perspective on Gen AI and why it is special despite the other advancements in AI that have taken place at different times. According to him, the thing that makes human beings different from other conscious organisms is nothing but language. In fact, I would like to extend this idea more broadly by claiming that language is the bedrock of human civilization. No settlement, civilization, scientific progress, culture, or art would be possible without language; it is the thing that connects one human to another. It enables humans to share, accumulate, and communicate collective knowledge across time and space. What makes Gen AI truly magnificent is its language intelligence. This is the mysterious capability I was pointing to earlier, which did not exist to this degree before. It is safe to say that Gen AI models like ChatGPT mimic this human capability to a great extent, and quite efficiently. They replicate both the good and the bad traits of human behavior, which is to be expected, since the data fed to them in order to train them was, after all, all generated by humans.
All of this makes Gen AI quite tricky to regulate, since it is very difficult to control the comprehension capability of a model without losing its human essence. Even defining what the best response is can be very ambiguous, because concepts of ethics and morality vary across human cultures and civilizations. Some absolutists are also making the case for no regulation at all, to preserve the ingenuity of responses, but that could restrict the industrial application of AI: one cannot accept biased, poor, or inconsistent responses from an application one is going to use. Moreover, our past experience of non-regulation of new technologies is a lesson in itself. Take the example of social media, which nearly everybody uses today and without which we cannot imagine our lives; we can already see its negative implications, from cyberbullying to financial scams.
Some regulation frameworks
People are already coming up with technological innovations to strike a balance between regulation and the creative aptitude of Gen AI models. I have personally experimented with two of these frameworks.
- Guardrails: This allows developers to control the responses of LLMs by wrapping the original API calls in a lightweight wrapper known as a Guard. The specification of the expected response can be configured and passed to the object definition of the rails. While it performed reasonably well with proprietary OpenAI models, support for open-source models is still in the development phase. Open-source alternatives like NeMo Guardrails by NVIDIA do support open-source integration, but they require considerable hardware to achieve satisfactory performance and throughput.
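The Guard idea can be sketched as follows. This is a minimal conceptual sketch, not the actual Guardrails or NeMo Guardrails API; the `llm` callable and the validators are hypothetical stand-ins for real model calls and response specifications:

```python
# Conceptual sketch of a Guard: wrap an LLM call, validate the response
# against a specification, and re-ask the model on failure. The llm
# callable and validators are hypothetical, for illustration only.

def guard(llm, prompt, validators, max_retries=2):
    """Call llm(prompt); return the first response passing all validators."""
    response = llm(prompt)
    for _ in range(max_retries):
        failed = [name for name, check in validators.items() if not check(response)]
        if not failed:
            return response
        # Re-ask, telling the model which checks failed (a common pattern).
        response = llm(f"{prompt}\n\nYour previous answer failed: {failed}. Try again.")
    return response

# Stub "model" that misbehaves on the first call, for demonstration.
calls = []
def stub_llm(prompt):
    calls.append(prompt)
    return "I am not sure" if len(calls) == 1 else "42"

validators = {"is_numeric": lambda r: r.strip().isdigit()}
answer = guard(stub_llm, "What is 6 * 7? Reply with a number only.", validators)
print(answer)  # "42", after one guarded retry
```

The real libraries add much richer specifications (structured output schemas, semantic checks), but the wrap-validate-retry loop above is the core mechanism.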
- Constitutional AI: It works just as the name suggests: developers configure a constitution, a set of principles, that responses from the models should adhere to. This configuration is passed in the form of constitutional chains, which generate the final response. The interesting thing about this concept is its ability to critique the response of one Gen AI model with another Gen AI model. Multiple models can thus be passed in the form of chains, where one is the primary responder and the rest act as critics. In terms of performance and throughput, I have to say I was pleasantly surprised to see such a simple idea working so well. At the same time, a lot can be improved upon, and the framework is still in an early development phase. One limitation I would like to point out is the problem of interpreting the constitution; an analogy with the general law of a society explains it well. For common queries, the constitution provides a clear mandate, but for complex queries and scenarios, interpretation of the constitution by a judging authority is required, which creates ambiguity. Similarly, on complex queries, Constitutional AI sometimes overlooks the need for critique and revision of the initial response. Furthermore, the reliability of the critiquing models could be questioned as well.
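The critique-and-revise chain described above can be sketched like this. Both "models" here are hypothetical stubs standing in for real LLM calls, and the constitution text is made up for the example:

```python
# Sketch of a constitutional chain: a primary model answers, a critic
# model checks the answer against each principle of the constitution,
# and the primary revises when a violation is flagged. The responder
# and critic are hypothetical stubs, for illustration only.

CONSTITUTION = ["Do not give specific medical dosage advice.", "Be polite."]

def constitutional_chain(responder, critic, prompt):
    response = responder(prompt)
    for principle in CONSTITUTION:
        critique = critic(response, principle)
        if critique is not None:  # critic flagged a violation
            response = responder(f"{prompt}\nRevise your answer. Critique: {critique}")
    return response

# Stub models for demonstration.
def responder(prompt):
    if "Revise" in prompt:
        return "Please consult a doctor for dosage guidance."
    return "Take 500mg twice a day."

def critic(response, principle):
    if "dosage" in principle.lower() and "mg" in response:
        return "Response gives a specific dosage, violating the constitution."
    return None

revised = constitutional_chain(responder, critic, "How much ibuprofen should I take?")
print(revised)  # the dosage answer is replaced by a safe revision
```

Note how the ambiguity discussed above lives in the critic: if it fails to flag a complex violation (returns `None`), the chain silently skips revision, which matches the limitation observed in practice.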
Can we wait for the future?
We could argue that we should first let the technology evolve and worry about regulation in the future. We are already seeing rapid evolution in AI, and according to academic experts it will take around two decades for AI to completely surpass human intelligence. By that time, we can expect AI itself to make major discoveries in different fields, from the natural sciences to abstract art. We are already seeing many discoveries made by virtue of AI, particularly in domains like the medical sciences and astronomy. Let me paint a picture: in a time when AI surpasses human intelligence, in the era of AGI (Artificial General Intelligence), one could ask an AI to reduce greenhouse-gas emissions; even with good intentions, it might decarbonize the atmosphere and end up destroying the whole ecosystem, since greenhouse gases are needed, even more than oxygen, for plants to grow and sustain life. The work required to set standards and limits on AI's impact on human life should begin early, alongside the ongoing advancements in the field, to avoid such future catastrophes. These principles can be amended in the future as per the needs of society at that time. I do not see absolutism, without any ground rules, benefiting us in the long run.
Conclusion
We have to keep in mind that the regulation problem is not limited to the general audience; the pioneers of Gen AI are themselves debating responsible AI and its formulation. In a congressional hearing in May last year, Sam Altman (CEO, OpenAI) himself requested that governments regulate AI. A universally accepted AI regulation charter is the need of the hour. I can see that day being as historic, in terms of its impact on human society, as the adoption of the United Nations' Universal Declaration of Human Rights, which is seen as a success of the collective human will to address a complex problem.