
Why Accountability is Critical When Using AI Tools

Feb. 21, 2023
1 min. read

About the Writer

Lewis Farrell


Highly curious about things that increase my awareness, expand my perception, and make me open to being a better person.

As AI permeates our daily lives, it generates many answers and saves considerable time, but its output is not always sufficient or trustworthy. While AI is generally accurate, knowing when to trust its results requires a fair amount of expertise in the relevant field. People must still accept ultimate responsibility for whatever the AI suggests, which means they must be able to judge whether those suggestions are any good.

This is not to disparage AI, but to emphasize that people cannot blame AI for the outcomes of their use of it. People are responsible for whatever they choose to do with AI, and they must accept responsibility for the results. This is especially important when using AI tools for work; GitHub Copilot is a good example.

It’s a good thing that Copilot can be trusted most of the time, but that is also the problem: it can only be trusted most of the time. You must be an experienced developer to recognize when its recommendations cannot be trusted. So, until we can have complete confidence in its output, experienced people must keep their proverbial hands on the wheel.

To summarize, while AI can be extremely beneficial, it is not infallible, and people must accept responsibility for the outcomes of using AI. When using AI tools like GitHub Copilot, developers must have the expertise to know when to trust its recommendations and when to take control themselves.

