7 Comments
Ronald Pavellas:

1. To delegate responsibility is to abdicate

2. "The devil made me do it"--Flip Wilson's "Geraldine"

Exploring ChatGPT:

Thanks, Ronald! I understand your first part and agree (situationally), but could you elaborate on 2?

Ronald Pavellas:

Translation of number two: "Even though I did it, it's not my responsibility, because I couldn't help it..."

Exploring ChatGPT:

Well stated! Thank you for the clarification.

Scott A. Weidig:

While I don’t disagree completely, the basis of your argument leans heavily on sources from 2015-2019, when most algorithmic work focused on machine learning, which, while an underpinning technology of artificial intelligence, is not the same as the modern AI of the past half decade or more that you outline in your argument.

Where we agree is that commitment to responsibility is a very good topic that needs to be discussed as society progresses forward with learning the ethical, equitable, and effective use of AI…

What happens when my Tesla gets pulled over for speeding because I chose to enable a profile that allowed it to go well over the speed limit? Who gets the ticket?

When a student simply creates a well-structured, effective prompt that allows AI to craft a well-structured and cited paper… who receives the A? Can we call that plagiarism when no author was plagiarized? OpenAI’s and other AI companies’ terms of service attribute all rights to both the prompt AND the generated output to the end user… does that effectively confer authorship under copyright?

When artificial intelligence supports a doctor’s decision, or provides a perspective and depth of knowledge that allows a doctor to treat and save a patient’s life, who gets credit for that outcome? How does the treatment fee get effectively split? Does that, or should that, bolster the doctor’s reputation or expertise? Does the treated, healed patient care, or are they simply grateful, because the other 5 doctors they saw who didn’t leverage AI prolonged their suffering for an extra 2 years of ineffective treatments…

I agree, these discussions are real and important, and will determine how AI is interwoven into society. They also need to be approached with an eye toward the positive impact and influence well-constructed AI can bring, not simply from the perspective of abdicating responsibility.

The centuries have produced reams of data, materials, and stories of individuals abdicating responsibility, as well as of human bias, human deceit, human negligence, human malpractice, and malfeasance harming individuals…

This discussion is not “new” and is not exclusive to AI. We have not resolved it, nor do we have a solution for it when it applies directly to human beings and the society we have evolved over the centuries. Poor decision-making and delegating, deferring, or sidestepping responsibility is a societal issue. Maybe AI will force this discussion and become a positive influence for change…

Exploring ChatGPT:

Thank you for the thoughtful comment, Scott! I think we’re closer than it might sound. You’re right that these questions didn’t start with AI and that newer systems complicate things even more. The Tesla, student, and doctor examples actually make the same point I was trying to surface: when capability increases faster than norms, responsibility gets blurry fast.

I’m not arguing that AI only leads to abdication or that its impact won’t be positive. I’m arguing that without clearer lines, we default to convenience. Credit flows to whoever benefits, blame flows to wherever it can disappear. That pattern long predates AI, but AI amplifies it.

If anything, I agree with your last point most. AI may be the forcing function that finally makes us confront responsibility we’ve avoided for centuries. The tech isn’t the root problem. It just removes our ability to pretend the problem isn’t there.

Comment removed (Dec 21)
Exploring ChatGPT:

Thank you very much, Neural! That’s exactly the trap. When things go well it’s “human judgment,” and when they don’t it’s “the model.” Responsibility dissolves right where the damage happens. I’m not convinced strict liability is politically easy either, but without something like it we just get the endless accountability theater you’re describing, where everyone gestures at process and no one owns outcomes.