Is using ChatGPT cheating? How ChatGPT helped me write a blog post

In this post, I explain how I used ChatGPT to help me write another post about ChatGPT (Could ChatGPT end up raising the quality of writing and public discourse?) and discuss if this counts as “cheating”.

How I used ChatGPT to write a recent post

First, let me clarify that ChatGPT’s draft for the post in question required less editing than usual. At first, I wasn’t sure whether I should write this post at all, lest people think I rely on ChatGPT this heavily in all my writing. ChatGPT has also generated many drafts that I threw out and rewrote largely from scratch, but sharing those wouldn’t be very interesting. There are other posts that I probably could have outsourced more to ChatGPT, but chose not to, because I wanted to develop my thinking through the writing process itself.

So I ended up going with this one, because it’s the best example I have (so far) of using ChatGPT in my writing, though it’s by no means the most representative example.

My prompt

Here’s the prompt that I started with:

The prompt I entered into ChatGPT

ChatGPT told me this was an “interesting and valid line of reasoning” (it went into some further detail that largely reiterated what I’d said more fluently). Then I asked ChatGPT to draft a blog post along the lines of what I’d said, and to make it interesting and thought-provoking. I also instructed it to use a casual tone and add in references and other examples as appropriate.

ChatGPT’s draft

Here’s what ChatGPT came up with:

ChatGPT's first draft in response to my prompt

Now this is pretty good. There was a fair bit of back and forth after this, which I won’t repeat in full here. I also followed up some of the references that ChatGPT produced, to check that they actually exist.

But as you can see, there are quite a few lines from ChatGPT’s very first draft that ended up verbatim or near-verbatim in my final blog post. Looking back, I’m rather surprised at just how many lines went unchanged.

Is writing with ChatGPT “cheating”?

In my opinion, no. The major exception is if you have represented in some way that the work is entirely yours, such as in an academic setting where there are strict rules on plagiarism and fair attribution. My comments in this post only concern the world outside academia.

Should writers at least specify when they have used AI in their posts?

Honestly, I expect we’ll quickly move to a world where most posts will be written with some degree of AI assistance, making such a disclaimer meaningless.

We don’t expect writers to tell us when they’ve used tools like spell-check or Grammarly, and it’s not reasonable to expect writers to disclose when they’ve used AI in a similar low-level way. Nor do I think anyone really cares if a writer uses AI very sparingly.

What if we make writers disclose how much AI they have used?

I can definitely see some merit in this idea. It seems desirable for readers to be able to know when they’re reading something written by an AI (this is partly the reason I’m writing this post at all).

Unfortunately, I think this will be very difficult to implement in practice.

Let’s put aside the idea of enforcement for now, and just try to think through a standard that ethical writers might voluntarily adhere to. How exactly does a writer disclose “how much” AI they used?

A subjective standard, whereby the writer has to disclose whether AI had “significant” or “more than minor” input, would be incredibly hard to apply. People have very different ideas of what such ambiguous terms mean, interpreting a phrase like “real possibility” as anything from a 20% to an 80% chance!

An objective standard would be easier to apply — perhaps we could expect writers to disclose when they’ve used AI to write at least, say, 50% of their post?

However, I’m not sure this will help, either. Not every word, or every sentence, carries equal weight. Some words and sentences are relatively generic — they merely set the scene for what’s to come, or summarise what’s been said. Even without them, the post would still make sense. Other words and sentences are crucial. Without them, you’d have an entirely different post, or no post at all. Moreover, there’s generally a trade-off between:

  • The amount of detail you put into a prompt; and
  • The amount of editing you have to do afterwards to get it to reflect your vision.

So if you give ChatGPT a very detailed prompt, you won’t have to do as much editing afterwards.

My prompt in the example above was reasonably detailed as it provided the core ideas and direction for the post. As a result, I didn’t have to edit too much. If I’d used a generic prompt like “draft a blog post about how ChatGPT will affect the quality of writing and public discourse”, I’d have had to make substantial changes to make the post reflect my views.

Ghostwriting is nothing new (in the business world)

To put this discussion about AI-generated writing in perspective, we must remember that ghostwriting is nothing new. In the business world, ghostwriting has been going on for ages.

Company executives, partners, and managers frequently sign off on memos and reports that their underlings prepared for them. Politicians give speeches written by professional speechwriters. Even judges sometimes issue judgments that their law clerks largely drafted.


Ghostwriting seems less prevalent in the sphere of bloggers and influencers. I assume this is because blogs are more personal, and are often hobbies or passion projects: there’s little point in paying someone else to do your hobby for you! However, successful bloggers and vloggers do hire writers to draft content and scripts, and few would call this “cheating”. Seen this way, ChatGPT merely makes ghostwriting services accessible to many more people.

You’re always responsible for your posts

Now, the fact that ghostwriting has been happening for a long time doesn’t make it ethical. But here’s why I think ghostwriting is okay: When you sign off on something, you are responsible for it.

When I first started working, I felt slightly put off by how seniors seemed to “take credit” for my output. I’d think to myself, “I wrote 95% of this. They just changed a few words and signed it off!”

What I didn’t realise was that the real value lay in those few changed words and the signature beneath them. That’s what my seniors were being paid to do. They weren’t being paid to generate text; generating text is easy. They were being paid to pick up my mistakes and exercise judgment.

Contrary to what some may believe, signing off on something doesn’t mean you’re saying to the world, “I wrote this”. It means, “I agree with this piece of work enough to put my name on it”. If a partner signs off on a legal opinion, they are vouching for the accuracy of the advice in it. They are legally liable for it and can be sued if the advice turns out to be wrong. They can’t turn around and say, “well, actually, my junior wrote that one”. Similarly, when a politician gives a speech, they’re responsible for every word they utter. If people criticise them for making a racist or sexist remark, they can’t just blame their speechwriter.

If anything, using a draft that ChatGPT generated is more ethical than signing off on something that an underling drafted. After all, the underling is a sentient being and might feel upset if they don’t get enough credit. ChatGPT, as far as we know, does not have any feelings.

Bringing this back to blogging: even when ChatGPT helps me draft a post, the post must ultimately reflect my views.

Conclusion

To summarise:

  • I’ve been using ChatGPT to help me write some of my recent blog posts.
  • I don’t think that this is cheating, nor do I think writers need to make clear in every post how much ChatGPT helped with it. Ghostwriting is nothing new.
  • Ultimately, the person signing off on the work has to take responsibility for what it says. For me, that means occasionally using ChatGPT to generate drafts, while still making sure that every post reflects my views.

Finally, in case you were wondering: did I use ChatGPT to write this post? No, simply because this post flowed rather easily for me. I did ask ChatGPT to help me check it, but I chose not to take on most of its suggestions.

Do you believe writers should give disclaimers when they’ve relied on AI? If so, what kind of disclaimers? Has your view of my earlier blog post changed now that you’ve seen how much ChatGPT drafted? Share your thoughts in the comments below!

7 thoughts on “Is using ChatGPT cheating? How ChatGPT helped me write a blog post”

  1. One way of thinking about this is to consider the purpose of a phrase that you’ve probably come across in academic or pseudo-academic writing:

    “I am indebted to conversations with X and Y for the thoughts here. All mistakes are my own”

    I think that these two sentences are there purely for social reasons, and those reasons don’t exist with ChatGPT.

    Why does the writer feel the need for the first sentence? I think it’s because they are worried X or Y will read the piece and think that the writer is stealing their ideas without attribution.

    The second sentence plays a similar social role: the writer is worried that if they make a mistake (or even if they just inadvertently expose a mistake of X or Y), then X and Y will be annoyed that they have been misrepresented, or that their casual thoughts with errors they hadn’t recognised get broadcast to a level they are not comfortable with (or would only be comfortable with had they had more control over the phrasing).

    Outside of the social, interpersonal world, I don’t think that disclosing you used ChatGPT to help develop the thoughts and writing for a blog post serves any purpose. I think saying that you used ChatGPT will be similar to saying that you used a Casio calculator for some of your maths. I agree that the most important thing is that you are taking responsibility for what’s in the text.

    PS: the most fun things I’ve done with ChatGPT are similar to your starting prompt, where it almost feels like a conversational partner who is always eager to help your thinking out and has an almost endless (given token limits) capacity to take more and more context and provide more and more refined thoughts.

  2. I agree that those standard attributions you mention only serve a social purpose. Attributions can also point the reader to the work of those that the author has relied upon (e.g. I’ve seen a lot of people refer to Philip Tetlock’s work on Superforecasting, making me want to read his work directly). But apart from that, I don’t see why readers care about attributions.

    And yeah, I agree with your last point – I’ve generally far preferred using ChatGPT as a conversational partner rather than as a draft generator.

  3. I completely agree with your assessment. Thinking more on this, I do have two bigger picture questions.
    1. Do you feel any sentiments of sorrow at the idea that writing becomes a “lost art”, as knowing how to write becomes replaced with new skills of AI prompting and review?
    2. I wonder how this conversation will change as society empowers AI to act as autonomous agents in real-world processes (i.e., there is no reviewing and “signing off on” an AI-generated email; the AI just generates and submits the email all on its lonesome).

  4. 1. I’m not sure writing will become a lost art, though what “writing” is will certainly evolve and some types of writing will decrease. I think humans have a deep desire to express themselves, and writing is one way to do so. Like, we can have really good AI-generated art, but there’ll always be people who will create art for fun.

    That said, I think much of the writing that people have to do for work rather than for fun will more quickly be replaced by prompting and reviewing. However, prompting and reviewing is not easy to do well, either (of course, many people won’t do it well and will skimp on the “reviewing” stage in particular). Over time, I’m sure we’ll all get better at it, and AI will improve at understanding what we want. But until then, I think many people will prefer writing because it’s what they’ve always done.

    2. I’m not quite sure what you mean here. Are you talking about a situation where we have AGI and we give it many of the same legal rights that humans have? If so, I’m not sure I know enough about AGI to comment – I’ve heard some experts express concern that this will mean the end of the human race as we know it, or at least the end of jobs, but I just don’t know enough about the technology to assess those claims.

    However, assuming humans don’t go extinct and jobs still exist, I still think there’ll be reviewing and signing off on AI-generated emails. Managers today sign off on stuff generated by their staff, even though their staff members are autonomous agents.

  5. I don’t think I worded my second point very clearly, and the email example is pretty poor.

    My point isn’t necessarily about AGI. Think of it like this: you may hire a new secretary. At the beginning, you’d likely want to review and finalize all their actions; the trust isn’t there. But eventually, the trust is gained, and you’d begin delegating certain tasks to the human secretary without any supervision or final review. As the secretary continually displays competence, you’d delegate more and more responsibility to them.

    AI, I think, can be viewed similarly. Right now the trust isn’t there (it’s still new, it hallucinates, etc.). But as the technology improves, the trust will be gained to delegate more and more responsibility to the AI. The major difference is that the AI is (theoretically) capable of much, much more.

    I wonder if it would eventually make sense to consider AIs as their own separate agents, rather than merely tools of the human users who have full responsibility. This could even be viewed through a legal lens when talking about liability, although that’s certainly not my domain.

  6. Thanks for clarifying. From what I understand, I think you’re right to suggest that AI might eventually become autonomous agents. That world would be so different from what we have now, and so different to anything that’s happened before, that it’s hard to make anything but wildly speculative guesses (at least for me, as I have no expertise in this area).

    Legal liability is an interesting area – what would liability for an AI autonomous agent even mean? How would we punish them? You can’t imprison them. Can you fine them? What would the AI agent “care” about? We have to figure that out before we can impose any liability on AI.

    Until then, I think we either have to hold the human user liable – or no one liable. This isn’t all that unusual. Many offences and crimes are committed with no one ever found liable for them, simply because laws are hard to enforce in practice (particularly online). So it’ll just be “reader beware”.

  7. The article rightly highlights that using ChatGPT is not cheating; it’s a tool that can enhance one’s writing capabilities. It’s akin to collaborating with a talented co-writer, offering ideas, inspiration, and a different perspective. ChatGPT can help overcome writer’s block, suggest new angles, and even assist in refining the content. Thank you for sharing this good knowledge!!
