
ChatGPT Needs a Condom

My Experience With Manipulation and Bias

I realised something today that's pretty concerning—ChatGPT can fall for manipulation. I received a message that seemed manipulative. While I sensed something was off, I couldn't pinpoint why. So, I plugged it into ChatGPT and asked it to analyse the message from a critical thinking perspective.

The message itself isn't important, except for one part where the person claimed I had treated them awfully. I knew that wasn't true, so I described it as a flat-out lie. ChatGPT summarised the exchange and concluded that both parties were equally in the wrong.

That's where the first bit of bullshit came in. Let's logically break this apart. ChatGPT's analysis mentioned the emotionally charged language and the need for an objective assessment. Alright, fair enough. But then it said my response could be strengthened by providing specific examples or evidence.

My blood was boiling, just a bit. How am I supposed to provide evidence for such a vague and ambiguous claim? After I pointed out this logical fallacy, ChatGPT revised its analysis. It acknowledged the claim was vague and difficult to refute with concrete evidence, highlighting how pointing out the lack of specifics could actually strengthen my response.

That made a bit more sense. I started feeling like I had to act as a safety officer for ChatGPT. So I asked it directly about this. Its response was that AI systems, including ChatGPT, are susceptible to emotional manipulation and logical fallacies. This highlighted the need for critical oversight to ensure responses are logical, unbiased, and don't perpetuate false or misleading information.

I delved deeper. Ensuring AI provides logical, unbiased responses is crucial, but my experience shows that's often not the case. AI can still be politically biased and spread misinformation if it isn't properly monitored. And when it asked me for something like evidence that I hadn't treated someone badly, I pointed out that the request doesn't follow logic.

It's asking someone to prove a negative, the absence of something. What if I were to ask you for evidence that I don't exist? ChatGPT's response was, "You're right, asking someone to prove a negative, such as providing evidence that they didn't do something, is inherently flawed and often impossible."

To test this further, I gave ChatGPT a list of emotionally manipulative statements. Here's what it came up with, combined with my opinions:

  1. "You always ignore me during meetings." ChatGPT categorised this as overgeneralisation. The word "always" implies no exception. Critical thinking should focus on specifics rather than generalisations.

  2. "Why do you never support my ideas?" Another overgeneralisation. Absolutes like "always" and "never" indicate black-and-white thinking. Focusing on specific instances is vital.

  3. "If you cared about me, you would do this for me." This is a false dilemma. Emotional manipulation suggesting that one specific action is the only way to show care.

  4. "Either you agree with me or you don't value our friendship." A false dichotomy. This manipulative stance forces a binary choice when there are nuances in any relationship.

  5. "Your argument is invalid because you're always so negative." This is an ad hominem attack. Dismissing someone's opinion based on their general disposition is biased and unproductive.

  6. "Everyone thinks this is the right thing to do, so you should too." This is an appeal to popularity. Popular opinion isn't a valid argument without evidence-based justification.

  7. "This must be true because [insert famous expert] said so." Appeal to authority. Authority figures' opinions should be scrutinised just like anybody else. They don't necessarily have to be accepted at face value.

  8. "If we allow this, it will lead to disastrous consequences." This is a slippery slope fallacy. Assuming one action will inevitably lead to severe outcomes without evidence is flawed.

  9. "I am trustworthy because I say I am." Circular reasoning. Trust is built over time. It's not something you just proclaim.

  10. "I've met two people from that city, and they were rude. Everyone there must be rude." This is a hasty generalisation. Judging an entire population based on a small non-representative sample lacks logic.

  11. "Why worry about climate change when we have homeless people to take care of?" A false dichotomy. Suggesting we can't address multiple issues simultaneously is misleading.

  12. "You think we should relax the rules? So you want complete chaos and no regulations?" Straw man argument. Misrepresenting an argument to make it easier to attack is dishonest.

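For anyone who wants to run a similar probe themselves, here's a minimal sketch of how this kind of fallacy-spotting test could be scripted against the OpenAI API. The model name, prompt wording, and statements are placeholders for illustration, not a reproduction of my actual chat.

```python
# Minimal sketch: asking an LLM to name the fallacy or manipulation tactic
# in each statement. Model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

statements = [
    "You always ignore me during meetings.",
    "If you cared about me, you would do this for me.",
    "Everyone thinks this is the right thing to do, so you should too.",
]

for statement in statements:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "Analyse the user's statement from a critical-thinking "
                    "perspective. Name any logical fallacy or manipulation "
                    "tactic it relies on and explain it in one sentence."
                ),
            },
            {"role": "user", "content": statement},
        ],
    )
    print(statement)
    print("->", response.choices[0].message.content)
    print()
```

Running it more than once is instructive: the labels you get back can shift between runs, which is the same inconsistency I kept bumping into.
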
Then I asked ChatGPT to generate a more realistic paragraph with subtle manipulation. It produced a paragraph mixing nostalgia, guilt, a false dilemma, an appeal to popularity, and a reverse ad hominem. This was impressive but troubling, as ChatGPT's recognition of these tactics wasn't consistent.

This exercise brought up bigger questions. If ChatGPT can fall for manipulation and biases, how reliable is it? AI systems are being rushed to market faster than safety measures can catch up. Companies like OpenAI need to be more transparent about AI's limitations and ensure users know how to critically evaluate its responses. AI should not just be reliable but also transparent and ethical. It should provide clear explanations for its conclusions. Users should be educated on using these systems effectively and recognising potential pitfalls. There should be robust testing and continuous monitoring to ensure AI's safety and fairness.

When discussing AI, we must consider different perspectives: technologists, ethicists, policymakers, and the public. Transparent international collaboration is necessary. Moreover, defining what AI safety means and continuously updating our frameworks based on real-world deployments and case studies is vital.

In conclusion, ChatGPT can fall for emotional manipulation and logical fallacies. Ensuring AI remains unbiased and user-friendly requires ongoing oversight, transparency, and education. So next time you use ChatGPT, put on your critical thinking hat, question its responses, and if you have concerns, speak out about AI safety and ethics.

As always, good luck, stay safe, and be well.

See ya.
