
A Google researcher was put on leave because he believed his AI project had become sentient. According to Bloomberg Businessweek, just hearing that something is backed by science makes people more prone to believe it. The claim becomes even more credible when a staff member at a large corporation makes it and journalists jump on the bandwagon.
To claim that something is sentient sets a high bar, one that by some definitions would exclude many humans. A sentient being would not only “detect” objects and respond or react to them; it would also feel pain, heat, cold, and pressure.
The presence of sentience can’t be proved. LaMDA is a transformer-based language model, built like GPT or an equivalent model: it takes in a sentence or partial sentence and predicts the text that follows. That behavior is not evidence of sentience, though it can easily be mistaken for sentience by an unwary person.
It’s simply a large language model that produces fluent text. Here’s a video explanation from Computerphile.
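To make that concrete, here is a minimal sketch of the predict-the-next-text behavior described above. It uses the Hugging Face transformers library with the publicly available GPT-2 model as a stand-in, since LaMDA itself is not publicly released; the prompt is only an illustrative assumption.

# Minimal sketch: a GPT-style model continuing a partial sentence.
# GPT-2 is a stand-in here; LaMDA is not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A partial sentence goes in...
prompt = "I feel happy when"
inputs = tokenizer(prompt, return_tensors="pt")

# ...and the model predicts the tokens most likely to follow.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The output reads like a plausible continuation of the prompt, which is exactly the kind of fluent completion that can be mistaken for a mind at work.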
Blake Lemoine is very well educated, but his claims while testing LaMDA may have come as a result of emotion. We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them. The issue was never whether the system is truly sentient, but the logical structure undergirding the machine learning. If its data leads it to operate on the premise that the system has value comparable to a human, it will make decisions based on that data. Lemoine made good points, but LaMDA is not sentient; even so, if a system is capable of learning and making decisions in a biased manner based on its “perceived” value, then it is unethical to have such a system.
The Issue
That is the issue, not whether the system is objectively sentient. There is no way to phrase this without attributing a somewhat sentient quality to these systems, but here it goes: what we need to consider is whether an AI “believes” it is sentient, not whether it truly is. ML engineers should keep this in mind, especially when building super-intelligent computers, and there should always be fail-safe software. Is this a big issue for LaMDA? Probably not, but as Moore’s Law has all but slowed and software takes up the sophistication slack, machine learning and the incredibly complex algorithms built to attain human-likeness will naturally give way to more complex “pointing and justification” type abilities.
Does AI deserve equal rights?
Technology (AI) should always be a tool for humanity, not something equal to or greater than people!
People are capable of being thoughtful and rational, but our wishes, hopes, fears, and motivations often tip the scales, making us more likely to accept whatever the mainstream media puts out. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work; for that matter, it could mean the end of humanity. Next time, don’t be hypnotized by a model the way Blake Lemoine was.