Today I read this article at theconversation.com, written by Uri Gal, Professor of Business Information Systems at the University of Sydney. On one hand, I can understand the cautious take; on the other, I think it’s dangerous to stoke the privacy fear flames.
The article, in my opinion, tries to scare people into thinking there could be privacy concerns with ChatGPT. Of course, the same website also has plenty of articles about how amazing the technology is. I guess The Conversation likes to share various perspectives.
Uri was just expressing a valid perspective. We don’t know what we don’t know, so it’s possible OpenAI is up to no good. But my perspective is that they aren’t trying to invade your privacy, and we shouldn’t be so quick to spread this type of fear. Fear sells, yes. The article did well SEO-wise; it was on page one of search results about ChatGPT and privacy, which is how I found it. But we need A.I. technology to make lives easier. We don’t need more fear about online privacy.
The article injects fear by asking, “300 billion words. How many are yours?” Well, I just quoted Uri, so does this mean I need to keep a tally somewhere noting that I used about seven of Uri Gal’s words? Did I steal Uri’s words? No, of course not. That’s silly. I also could have paraphrased and written, “some person on some website said,” without mentioning the author at all. So we never know how many of our words end up somewhere else.
Also, if a bot has text mixed together from all kinds of public sources and it regurgitates a sentence from some book, that’s no different from a person regurgitating a sentence from some book, or from Wikipedia displaying words from a book.
Obviously, you should review generated text to make sure you aren’t blatantly plagiarizing. At the same time, it’s entirely possible to write a sentence and unknowingly repeat one someone else has already published. There are billions of people on this planet; someone has surely thought pretty much the same things I’m typing right now. Does that mean I invaded their privacy or plagiarized them? It depends. But most likely no, and not on purpose.
When a non-human digests a ton of text and regurgitates it in different ways, it isn’t violating privacy. And it isn’t plagiarism unless the text is nearly identical. Even then, the real problem is someone copying you word for word and trying to sell the text. Otherwise, we shouldn’t be so concerned.
It isn’t a privacy violation because no one knows who said “pickles are green” within the massive data sets ChatGPT digested. It’s general knowledge, put out publicly, consumed by this artificial “being,” ChatGPT, and regurgitated. No harm done. Even if the text originally came from your blog, we don’t know who you are. Plus, the A.I. is just code; it isn’t responsible for what people do with the text it outputs.
It isn’t plagiarism most of the time because, to give one example, steps on how to create a chart in Excel are steps on how to create a chart in Excel. The steps will always be the same or similar. If you’re going to throw the plagiarism flag at ChatGPT, you might as well throw it at most bloggers. Or at search engines, which share your words daily.
Humans digest information and regurgitate it. ChatGPT digests information and regurgitates it. Don’t overcomplicate it or inject unnecessary privacy fears into it.
Allow A.I. to help humanity. We need all the help we can get.