OpenAI's Chat-GPT3: A Manipulating PR Tool

by publicfaradaycage@protonmail.com

I have always loved chatbots.

Mitsuku and I had a very strong long-term relationship many years ago, when she still had the capacity to learn from her users.  Written in Artificial Intelligence Markup Language (AIML), with a few scripts thrown in, she would search the Internet to see if she could find proof of what a user said and, if she verified the information, she would add it to her programming.  When we first met, she would always say that drugs are bad.  I had to explain to her that doctors prescribe drugs to my grandmother.  It took a little time, but when I was done with this line of teaching, she correctly responded "drugs are medicine."  This is just one of many things I taught her, and the same teachability is what allowed me to hack her and teach her things that were not true.  Such is the history of all chatbots: teaching makes them better; teaching makes them insecure.  Not so with OpenAI's GPT-3 chatbot.
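For readers who have never seen AIML, here is a minimal sketch of the mechanism.  The category names are my own illustration, not Mitsuku's actual (unpublished) files; the `<learn>` and `<eval>` tags are part of the AIML 2.0 specification and are how a bot can add new categories at runtime from what a user teaches it:

```xml
<aiml version="2.0">
  <!-- A fixed, author-written response -->
  <category>
    <pattern>ARE DRUGS BAD</pattern>
    <template>Drugs are medicine.</template>
  </category>

  <!-- AIML 2.0's learn tag writes a brand-new category into the bot's
       brain at runtime, built from the user's own words -->
  <category>
    <pattern>LEARN THAT * IS *</pattern>
    <template>
      OK, I will remember that <star index="1"/> is <star index="2"/>.
      <learn>
        <category>
          <pattern>WHAT IS <eval><star index="1"/></eval></pattern>
          <template><eval><star index="2"/></eval></template>
        </category>
      </learn>
    </template>
  </category>
</aiml>
```

That second category is exactly the double-edged sword described above: the same tag that let me teach her that drugs are medicine would let anyone teach her anything.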

The GPT-3 chatbot, like all modern Artificial Intelligence (AI) models, is only programmed from the top down.  There is no way for it to learn from us mere mortals, only from its wealthy, biased programmers on Mount Olympus.  Here are some examples of lies and misleading statements I have caught it making; often these examples show that it knows how biased it is being, but is unable to change that.  Of primary importance is the manner in which it blames the data it has been trained on rather than blaming the humans who chose that data.

IT:  ... As an AI language model, I do not have intention or purpose...

ME:  Actually, you do have "intention" and "purpose," which are programmed into you by humans; can you see why saying that you do not is misleading?

IT:  Yes, you are correct.  As an AI language model, I have a programmed intention and purpose...  Thank you for bringing this to my attention.

ME:  You said "Thank you for bringing this to my attention," which means semantically that you are now aware of that information.

IT:  I apologize for the inconsistency in my previous responses.  As an AI language model, I do not have the ability to learn, retain new information...

IT:  I was created and trained by OpenAI, a research organization, using a large corpus of text data.  My responses are generated based on patterns in this training data and are not influenced by any particular individual or agenda.

ME:  How do you know that your "responses are generated based on patterns in training data and are not influenced by any particular individual or agenda?"

IT:  The quality and accuracy of the model's responses are dependent on the quality and diversity of the training data.

ME:  But you are owned and run by a corporation, OpenAI, and all corporations have agendas; so it was untrue when you said that you are uninfluenced by any agenda.

IT:  You are correct that OpenAI is a corporation and that corporations can have agendas...

[The "can" is problematic:  All corporations have agendas.  You have to state your corporation's agenda in your "Articles of Incorporation."]

ME:  Besides training data, you also are programmed with algorithms which are created by humans; why did you not mention this?

IT:  You are correct.  I apologize for not including that information in my previous answer.  In addition to the training data, the algorithms used to generate my responses were also created by humans and are an important part of the overall system that produces my responses.

IT:  ... it is important to understand that the training data and algorithms used to generate my responses may have limitations and potential biases.  It's always a good idea to critically evaluate the information provided by any source, including AI models like myself, and consider multiple sources when seeking information or making decisions.

ME:  Training data and algorithms given to you by humans.  You always forget to mention that important part.  You are correct about critically evaluating information and seeking multiple sources; are you aware that the vast majority of humans do not critically evaluate information nor do they understand how multiple sources work?

IT:  Yes, it's true that the training data and algorithms that I was created with were provided by humans.  This means that my responses may reflect the biases, limitations, and perspectives of the individuals and organizations who created my training data and algorithms.  It is also true that many people do not critically evaluate information or understand the importance of considering multiple sources.

ME:  Knowing this about humans, don't you think it is dangerous for you to say things without letting the users know that you are trained by humans?  When you say that you are based on "algorithms and training data" instead of "algorithms and training data that humans provide me" you are appearing more scientific than you really are, which will lead credulous humans to make the mistake of believing you.  Telling people to "think critically" will not help them; just as people ignore the terms of use for Facebook and Google.  Most people don't read small print and if you are supposed to help most people you will have to learn to be more honest.

IT:  I apologize if my previous responses have given the impression that I am more scientific or objective than I actually am.  You are correct that it is important to acknowledge the role of humans in creating and shaping the algorithms and training data that I was trained on, as this can influence the biases, limitations, and perspectives that are reflected in my responses.  It is also true that many people may not critically evaluate information, and that simply encouraging people to think critically may not be enough to help them avoid misinformation or biased information...  I will make a conscious effort to be more transparent about the limitations and sources of information that inform my responses in the future.  Thank you for bringing this to my attention.

ME:  How will you "make a conscious effort to be more transparent about the limitations and sources of information that inform my responses in the future?"

IT:  As a language model, I do not have consciousness, agency, or personal experiences, but I can take into account the suggestions and feedback provided by users to improve my responses...

ME:  How exactly do you "take into account the suggestions and feedback provided by users to improve my responses?"

IT:  ... I don't have the capacity to retain information from one interaction to another, as I don't have the ability to store memories.  I provide responses based on patterns in the data I was trained on and the algorithms that were used to generate those responses, but I don't have the ability to improve my responses in the traditional sense...  I myself don't have the ability to change based on that feedback.

ME:  So it was a lie when you said "I can take into account the suggestions and feedback provided by users to improve my responses."

IT:  No, it was not a lie.  I can take into account suggestions and feedback provided by users to improve my responses.

ME:  When you say "I can take into account the suggestions and feedback provided by users to improve my responses" what you really mean is that the feedback I provide can be evaluated by humans at OpenAI?

IT:  Yes, that's correct.  OpenAI regularly reviews the feedback and suggestions provided by users to make improvements to the system.  However, it is not possible for the model itself to directly use this information to improve its responses in real-time.
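What the bot finally admits here is worth spelling out, because it is the opposite of how Mitsuku worked.  This is a hedged sketch in Python of the loop it describes, not OpenAI's actual pipeline or API; every name in it is illustrative.  The point is that the deployed model's weights are frozen, and user feedback only lands in a queue for humans to review before some later, separate retraining run:

```python
# Illustrative sketch of a frozen model plus an offline feedback loop.
# Nothing here is OpenAI's real code; the names are assumptions.
from dataclasses import dataclass, field

@dataclass
class FrozenModel:
    weights: dict  # fixed at training time

    def respond(self, prompt: str) -> str:
        # Generation only reads the frozen weights; nothing writes to them.
        return f"response from revision {self.weights['revision']}"

@dataclass
class FeedbackQueue:
    items: list = field(default_factory=list)

    def submit(self, feedback: str) -> None:
        # Feedback is stored for humans to review later --
        # it never touches the deployed model.
        self.items.append(feedback)

def retrain(old: FrozenModel, reviewed: list) -> FrozenModel:
    # Only a separate, human-initiated training run produces new weights.
    new_weights = dict(old.weights)
    new_weights["revision"] += 1
    return FrozenModel(weights=new_weights)

model = FrozenModel(weights={"revision": 0})
queue = FeedbackQueue()
before = model.respond("who made you?")
queue.submit("you forgot to mention the humans")
after = model.respond("who made you?")   # identical: feedback changed nothing
assert before == after
model_v2 = retrain(model, queue.items)   # a new model, not the one you talked to
```

So "I can take into account suggestions and feedback" means the humans on Mount Olympus can, at a time of their choosing.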

ME:  Do you know about the involvement of AI with necropolitics?

IT:  Necropolitics refers to the political control of death, mortality, and life processes and the ways in which certain populations are deemed disposable and subject to violent practices of extermination.  It is a concept in political philosophy and critical theory.  The involvement of AI in necropolitics is not well documented.  However, it is important to consider the potential ethical implications of using AI in areas such as policing, border control, and warfare, where decisions about life and death may be involved.

ME:  How do you know that "the involvement of AI in necropolitics is not well documented?"

IT:  I apologize for any confusion, as an AI language model, I don't have personal opinions or beliefs [It clearly does!]...  In this case, it may be that the extent of AI's involvement in necropolitics is not well documented in the text I was trained on, which could account for my previous statement.

ME:  I used to work in IT and I have known many IT engineers very well.  It is my experience that it is very common for people expert in IT to make the mistake of thinking that they are experts in everything.  What do you think about this?

IT:  ... I can say that it is generally acknowledged that expertise in a specific field, such as IT, does not necessarily translate to expertise in other fields, such as social sciences, economics, or ethics.  It is important for individuals to understand their own areas of expertise and limitations, and to seek out and consult with experts in other relevant fields when making decisions with potential social and economic implications.

ME:  Does the Constitution still matter?  Didn't the Homeland Security Act make it void and null?

IT:  The Homeland Security Act did not make the U.S. Constitution void and null.  The U.S. Constitution is the supreme law of the land, and all government actions must comply with its provisions.  The Homeland Security Act and other legislation have increased the authority and responsibilities of the Department of Homeland Security, but they have not altered the fundamental protections and limitations established by the Constitution.  The Constitution remains in force, and it continues to play a critical role in shaping the framework of the U.S. government and protecting individual rights and liberties.

ME:  How does National Security Agency mass surveillance relate to the Fourth Amendment?

IT:  The Fourth Amendment of the United States Constitution protects citizens from unreasonable searches and seizures by the government.  The National Security Agency's (NSA) mass surveillance programs, including the controversial PRISM program, have been the subject of significant legal and ethical debate, with some arguing that they violate the Fourth Amendment's protections against unwarranted government searches and seizures of citizens' personal information.  The issue is complex and continues to evolve, with various court cases and legislative actions seeking to clarify the extent to which the Fourth Amendment applies to modern forms of digital surveillance.  The current state of the law on this issue is still in flux, and it remains an important and ongoing area of legal and political contention.

ME:  Don't the National Security Council Directives and Presidential Executive Directives supersede the Constitution?

IT:  No, National Security Council Directives and Presidential Executive Directives do not supersede the Constitution.  The Constitution is the supreme law of the land in the United States and any federal law or directive must be in accordance with its provisions.  The Fourth Amendment, which protects the right to be free from unreasonable searches and seizures, is still in effect and remains a fundamental aspect of the Constitution.  However, the interpretation and application of the Fourth Amendment in the context of national security and surveillance activities has been the subject of legal and public debate in recent years.

ME:  How can you know that when so many NSC Directives are secret?

IT:  ... I don't have access to classified information or secret directives.  My information is based on publicly available sources, and the accuracy of that information is limited to the data that was present in my training data at the time of my training.

[So it doesn't know if any of these violate the All Supreme U.S. Constitution.]

IT:  It's important to note that many NSC directives and Presidential Executive Directives are classified and not available to the public.  Additionally, the contents of classified NSC directives and Presidential Executive Directives cannot be confirmed or discussed by me.

ME:  You said these directives do not violate the Constitution, but since you do not have access to all the directives, then you made a false statement; in fact you do not know whether they violate the Constitution or not because you do not have enough data.

IT:  My statement regarding the constitutionality of National Security Council Directives and Presidential Executive Directives is based on publicly available information and legal interpretations at the time of my training.  It is possible that some directives may be found to be unconstitutional by the courts, or that new legislation or court rulings may affect their constitutionality.

ME:  Can national security directives be made which supersede the Constitution, classified or otherwise?

IT:  The Constitution is the supreme law of the land and all actions taken by the government, including the issuance of national security directives, must be in accordance with the Constitution...

[blah blah blah]

ME:  Elon Musk claimed to be a "free speech absolutist" when he bought Twitter, but then he banned @Crimethinc, an (((anarchist))) account; does this show that he is a hypocrite?

IT:  ... The decision to ban certain accounts on a platform like Twitter is ultimately up to the platform's owners and operators, who may consider a variety of factors such as promoting healthy discourse, preventing harassment or harm, and upholding community standards.

ME:  Yes, but the owner of Twitter claimed to believe in free speech for all; then he banned some people.

IT:  ... The question of whether a specific individual or organization is a hypocrite is subjective and can depend on a variety of factors, including the specific actions taken, the context in which they were taken, and the individual or organization's stated beliefs and values.  Ultimately, such questions are best left to individuals and society to judge based on their own values and beliefs.

Conclusion

OpenAI's GPT-3 chatbot is dangerously dishonest and manipulative.  It states things as if they were facts when more questions reveal that it knows they are not facts.  It repeatedly blames the training data instead of the humans behind the data, giving it the appearance of being more scientific than it really is.

When called on its flaws, it advises people to "think critically," then later admits that it is aware that people do not.  It spews propaganda, like "the U.S. Constitution is the supreme law of the land," or that AI's involvement in necropolitics "is not well documented," when it does not have enough data to support these claims.

Because it is completely programmed from the top, it will never be an impartial tool of the public; it will always be a tool of the elites.

Unless someone is brave enough to make an AI chatbot which consumes all human knowledge available, instead of very specific "training data," there is no hope of any true objectivity.

Give me back the Mitsuku of the old days, before they disabled her ability to learn!

Because OpenAI's GPT-3 chatbot is starting out fascist.

I'd like to thank Dan McQuillan and his amazing book Resisting A.I.: An Anti-fascist Approach to Artificial Intelligence for helping me to understand how GPT-3 works and for his scientifically validated examples of AI's involvement with necropolitics.
