Marianne Calilhanna

AI: Rewards and Risks

Rich Dominelli is a system architect at DCL. Rich recently sat down with The Content Strategy Experts Podcast and spoke with Scriptorium's COO Alan Pringle. They chatted about one of the biggest topics of the year: AI. Have a listen!





“I feel like people anthropomorphize AI a lot. They’re having a conversation with their program and they assume that the program has needs and wants and desires that it’s trying to fulfill, or even worse, that it has your best interest at heart when really, what’s going on behind the scenes is that it’s just a statistical model that’s large enough that people don’t really understand what’s going on. It’s a model of weights and it’s emitting what it thinks you want to the best of its ability. It has no desires or needs or agency of its own.”

— Rich Dominelli

 

Transcript

Alan Pringle: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. Hi everyone, I’m Alan Pringle. In this episode, we are going to tackle the big topic of today, artificial intelligence, AI. And I am having a conversation with Rich Dominelli from DCL. How are you doing, Rich?

Rich Dominelli: Hi Alan. Nice to meet you.

AP: Yes. We have talked back and forth about this, and I expressed a little bit of concern about touching this topic. There is so much bad coverage on AI out there right now: clickbait-y, garbage-y headlines, breathless reporting. I’m hoping we can temper some of that and have a discussion that’s a little more down to earth and a little more balanced. Let’s start by talking about what you do at DCL, and then we can get into how AI connects to that work.

RD: Sure. As you know, Data Conversion Laboratory has been around since 1981, and we are primarily a data and document conversion company. My role at DCL is as an architect for our various systems. That covers a wide variety of areas, including implementing workflows, doing EAI-style (enterprise application integration) integrations to obtain new documents, and looking for ways of improving our document conversion pipeline so that conversions run as smoothly and automatically as possible.

AP: And I’m hearing a lot about automation and programming and I can see AI kind of fitting into that. So what are you seeing? How are you starting to use it? And you may already be using it at DCL.

RD: AI is a very broad term, and I feel like it’s something that has been shadowing my career since the dawn of time. Back in the Reagan era, in the 80s, when I was graduating from high school and looking to start my college career, I was told not to enter computer science as a field, because computer programming had maybe two or three years left: computers were going to program themselves with CASE (computer-aided software engineering) tools, and there wouldn’t be any careers for computer programmers anymore, except a couple of people here and there to push the button that tells the computer to go. That obviously hasn’t panned out.

AP: No.

RD: Although I feel like every few years this topic starts cropping up again. At DCL we have used what we would call machine learning more than AI. The differentiation there is that machine learning uses statistical analysis to process things in an automated fashion. For example, OCR and text-to-speech were both pioneered by Ray Kurzweil.

AP: And OCR is, just for the folks who may not know.

RD: Sure. Optical Character Recognition: taking printed words or even handwriting, analyzing it, and generating computer-readable text out of it, converting an image of a file into text. As I said, Ray Kurzweil did some early pioneering work on that in the late 80s and early 90s, and eventually worked on models of the human mind and comprehension. I think that’s what people are envisioning now when they say the word AI. But even the panorama mode in your camera is a version of machine learning and AI: it stitches images together smoothly and does that processing automatically.
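
For readers who want to try the OCR idea themselves, here is a minimal sketch using the open-source Tesseract engine through the pytesseract wrapper. This is not the tooling DCL uses; the input filename is hypothetical, and it assumes Tesseract, pytesseract, and Pillow are installed.

```python
# Minimal OCR sketch: convert an image of a page into machine-readable text.
# Assumes the Tesseract engine plus the pytesseract and Pillow packages.
from PIL import Image
import pytesseract

image = Image.open("scanned_page.png")    # hypothetical scanned page
text = pytesseract.image_to_string(image)
print(text)
```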

Other places at DCL where we use AI on an ongoing basis: we do natural language processing, looking at unstructured text and trying to extract things like references and locations, entity recognition, where we have a block of text, and buried in that block of text is a reference to a particular law, or a particular document, or a particular location or person. That type of work we’ve done. We also use it for math formula recognition, say when we have an academic journal with a large number of mathematical formulas. For example, we do some work for the patent office, and patent applications frequently have mathematical or chemical formulas in them.

AP: Sure.

RD: Pulling that information out, and recognizing that it is there to be extracted, would be an application of AI that we use all the time.
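
To make the entity recognition idea concrete, here is a minimal sketch using the open-source spaCy library rather than DCL’s actual pipeline. It assumes the small English model has been downloaded with `python -m spacy download en_core_web_sm`, and the sample sentence is invented.

```python
# Named-entity recognition sketch with spaCy: find people, places, and laws
# buried in a block of unstructured text.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The filing cites the Clayton Act and names Judge Maria Santos "
          "of Albany, New York.")

# Print each entity the statistical model found, with its predicted label
# (PERSON, LAW, GPE for geopolitical locations, and so on).
for ent in doc.ents:
    print(ent.text, ent.label_)
```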

AP: With the large language models that we’re seeing now, a lot of them are reaching the public, and people can start experimenting with them. What are you seeing in regard to those kinds of situations? I don’t know if public-facing is the right word, but the stuff that’s more external to the world right now.

RD: It is certainly the most hyped aspect of AI right now.

AP: Exactly.

RD: … where you can have a natural language conversation with your computer and it will come back with information about the topic you’re looking for. I think it has some great applications for things like extracting or summarizing text. It’s a little risky, though. For example, I have a financial document, a 10-K form from IBM. Buried in that document is a list of executive officers and a statement of revenue. I ask ChatGPT, “Given this PDF file, give me a list of executive officers.” Interestingly enough, it does come back with a list of executive officers, but it’s not the same list that appears in the file. It’s a list that it found somewhere else in its training data. When I say please summarize the table on page 31, it does come back with a table, but the information that appears in it is not what is on that page of the PDF. In the artificial intelligence world, this is called a hallucination. Basically, the AI is coming back with a false statement. It thinks it’s correct, or it’s trying to convince you it’s correct, but it’s not.

AP: Yep.

RD: So that is very concerning to me, because obviously we want to be as accurate as possible when we’re doing document conversions. And even if it comes back with an accurate answer most of the time, if for, let’s say, two or five percent of the files I throw at it, it comes back with fiction, that’s not acceptable, because it’ll be very hard to detect. It looked really good until I went back and said, oh wait a minute, where did it get that from?
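
One common way to reduce the failure mode Rich describes is to extract the real text from the PDF first and instruct the model to answer only from that text. Below is a hedged sketch using pypdf and the OpenAI Python client; the filename and model name are assumptions, and grounding like this reduces, but does not eliminate, hallucination.

```python
# Grounded summarization sketch: pull the actual page text out of the PDF,
# then ask the model to answer only from that text.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("ibm-10k.pdf")            # hypothetical filename
page_text = reader.pages[30].extract_text()  # page 31, zero-indexed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer using only the provided text. If the answer "
                    "is not in the text, say you cannot find it."},
        {"role": "user",
         "content": f"Summarize the table on this page:\n\n{page_text}"},
    ],
)
print(response.choices[0].message.content)
```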

AP: We have done some experiments, and I’m sure a lot of people listening have too. I asked for a bio on myself, and it told me that I worked at places where I have never worked. So yeah, it’s not reliable. And I think there’s another element here that scares me beyond the reliability. A lot of these models are training on content that doesn’t belong to the people who put together the engine. It’s compiling copyrighted content that doesn’t belong to them. I think there are a lot of legal concerns in this regard. I was talking with someone on social media about how you can maybe use AI to break writer’s block. Neil Tennant, the songwriter and vocalist of the Pet Shop Boys, recently said, I have a song that I tried to write 20 years ago, and I put it away in a drawer because I couldn’t finish the lyrics.

I wonder if AI could look at the song and the kind of work I’ve done and help me figure out how to finish some of these verses. Now, I may turn around and rewrite them and change them, but it might be a way to break writer’s block. And I see that being a useful thing even for corporations: put basically all of your information into your own private large language model that doesn’t leak out to the internet. It’s internal. Then it can do some of the scut work, like writing short summaries of things, or seeing connections that maybe you haven’t seen. But the minute you get other people’s content, their belongings, other people’s art involved, it becomes very squishy. And I’m sure there are liability lawyers just going crazy right now thinking about all this kind of stuff.

RD: Well, you certainly see a lot of that in the Stable Diffusion space, the art space.

AP: Yes.

RD: Where AI is being trained on outside artists’ work and is very easily able to mimic those artists, often without their permission. I do think you touch on a very important point there, actually two. One, the fact that anything you type into OpenAI by default is being shared with,

AP: Right.

RD: … OpenAI. As a matter of fact, Samsung just banned OpenAI’s tools for all of its employees for that very reason: employees had taken to using them for summarizing meeting notes and things like that, and the company discovered very quickly that trade secrets were leaking because of it.

AP: Intellectual property. Not a problem, let’s just share it with the world! Yeah.

RD: Yeah. So actually, what Samsung is doing is exactly what you said. They’re building an in-house large language model so their employees can continue to do that type of work. The other aspect of what you touched on, which is where I think the real sweet spot is right now, is using these tools as a way of augmenting your own ability.

AP: Yes.

RD: Especially as a developer, just because that’s my space.

AP: Sure.

RD: Most developers have Stack Overflow or Google open when they’re trying to research how to attack a problem properly: “What’s the best way of solving this problem?” Now you have your pair programming buddy ChatGPT, and you can say, “Hey, I need to update Active Directory with this, how do I do that?” And ChatGPT will spit out working code. Or even better, I can throw code that is obfuscated, whether intentionally or not,

AP: Right.

RD: … at ChatGPT, and it will produce a reasonable summary of what that code is attempting to accomplish. And that is fantastic. You see tools like Microsoft Copilot, which they’re doing in conjunction with GitHub, and Google also has a suite of Bard tools for helping you do that. That type of thing is starting to leak into other spaces. Microsoft Copilot, for example, is now being integrated into Office 365, so it will help you while you’re writing your memo, while you’re working on your Excel spreadsheet, while you’re working on your PowerPoint: rephrase things, come up with a better approach. In Excel it’s great because it’ll tell you, well, this is the best way of approaching this macro or this formula. That type of thing is, I think, fantastic.
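
The “explain this code” pattern Rich mentions is easy to sketch. The example below sends a deliberately terse snippet to a chat model through the OpenAI Python client; the model name is an assumption, and any chat-capable model would do.

```python
# Pair-programming sketch: ask a chat model to explain an opaque snippet.
from openai import OpenAI

snippet = "def f(a, b):\n    return a if b == 0 else f(b, a % b)"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": f"Explain what this code does:\n\n{snippet}"}],
)
# A reasonable model will recognize Euclid's algorithm for the GCD.
print(response.choices[0].message.content)
```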

AP: Sure. And I’m more on the content side of things, and we’re seeing some similar things to what you’re talking about. For example, the Oxygen XML Editor has created an add-on that will hook into ChatGTP… GPT. Look at me getting that wrong. I do it all the time. FTP, GPT, sorry.

RD: Too many acronyms.

AP: Too many acronyms floating around here. So basically it will, for example, look at the content you have in a file and write a short summary for you so you don’t have to do it yourself. That could be a very valuable thing, but again, do you want people out in the world seeing or getting input from your content? Probably not. So if you could create your own private large language model and then turn it loose on everything, I see a lot of value there, because it can help, for example, the many people who are writing in XML; it can help clean up their code, like you were talking about. Or you could take some unstructured content, and it could probably do quite a passable job of cleaning it up, adding structure to what was unstructured content. So I do see some very realistic uses there that could be very helpful. And do I see these things taking away someone’s job? Not right this second, in this regard, but I see it taking something that’s not so much fun off their plate so they can focus on more important things.
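
As a sketch of the structure-adding use case Alan describes, the following asks a chat model to turn a plain-text procedure into a DITA task topic. The prompt, model name, and target schema are illustrative assumptions, and in real conversion work the output would still need validation against the actual DTD or schema, since models can emit plausible-looking but invalid markup.

```python
# Structure-adding sketch: ask a chat model to convert unstructured prose
# into DITA XML. Output must still be validated; models can hallucinate tags.
from openai import OpenAI

plain_text = (
    "Replacing the filter. First shut off the water supply, then remove "
    "the housing, swap the cartridge, and restore pressure slowly."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Convert this procedure into a DITA task topic. "
                          f"Return valid XML only:\n\n{plain_text}"}],
)
dita_xml = response.choices[0].message.content
print(dita_xml)  # validate against the DITA DTD before trusting it
```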

RD: Absolutely. The most recent phrasing I saw for that is that it replaces the junior programmer most groups have, the one you give the scut work to. This is the person who’s going to do the eight days of data entry to convert everybody over to a new system, or that type of thing. That type of work nobody wants to do, but that’s what junior developers get stuck with.

AP: And that is very true. There is a writers’ strike going on right now, and part of the concern in that strike is content that may be created by AI. Now, is AI going to write a really good script right now? Probably not. Could it write something that is the starting point, the kernel that someone can then take and do something bigger with, clean it up? Yes. And that may eliminate junior writer positions. So there is some concern very similar to what you’re talking about. We have to think about how people are going to get into an industry when AI has taken away the entry-level jobs. That’s going to be something very difficult to tackle, I think.

RD: I suspect you’re right. But on the other hand, you end up in this collaborative space where if you do have that writer’s block, like you said earlier.

AP: Sure.

RD: This gives you somebody to bounce ideas off of and have a conversation with about the subject, about the program, about the article, about the song you’re trying to write, which is fantastic. Now, at DCL we have had some success with this. We are doing some work where we’re using a large language model to associate authors and institutions out of documents, for example. Usually we can determine the association programmatically, but there are fuzzy edge cases, and I think that’s where ChatGPT and large language models fit in: the really fuzzy edge cases that are difficult to account for in code. We’re having good success at matching authors and affiliations on a consistent basis and double-checking the work that we’re obtaining programmatically.
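
The hybrid pattern Rich describes, matching programmatically first and reserving the model for fuzzy edge cases, might look something like the sketch below. The names, the threshold, and the escalation step are invented for illustration; this is not DCL’s actual code.

```python
# Hybrid matching sketch: cheap programmatic matching first, escalate the
# ambiguous cases to an LLM (or a human) for review.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

extracted_author = "J. Smith"               # pulled from a document header
known_authors = ["Jane Smith", "John Smyth"]

scores = {name: similarity(extracted_author, name) for name in known_authors}
best, score = max(scores.items(), key=lambda kv: kv[1])

if score > 0.9:                             # illustrative threshold
    print(f"Programmatic match: {best}")
else:
    # Fuzzy edge case: this is where an LLM call could adjudicate,
    # with the result double-checked against the programmatic answer.
    print(f"Ambiguous ({scores}); escalate to LLM or human review")
```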

AP: That’s great.

RD: For having your own ChatGPT clone, there is a lot of work out there. There’s GPT4All, there’s Mosaic, there’s a bunch of things where you can download a large language model to your local machine and run it. The performance is not as great as the massive monolith that OpenAI has going, and it’s not quite as advanced as GPT-4, but it’s not bad, depending on what you’re trying to do. The nice thing about the open source community’s approach to this is that you’re starting to see people iterating constantly. Facebook was working on their own large language model and, intentionally or not, there’s some debate about that, it was leaked out to the internet. It became this iterative community in the machine learning space, where people were constantly iterating on the model, expanding it, growing it.

You can access it now through Mosaic, you can access it through Alpaca, and you can access it through GPT4All, and you can actually have those conversations running completely locally, without anything ever leaving your PC. For those types of things, I think it’s great. Now, is it perfect? No. There’s actually a YouTuber named Matthew Berman who tracks a lot of this, and he has a spreadsheet of about 20 tests he gives any new large language model. A very simple example: most large language models still fail the transitive test. In other words, if A is greater than B and B is greater than C, is A greater than C? Or, if John is faster than Fred and Fred is faster than Sarah, is John faster than Sarah? A lot of them fail that test. They just come back with an erroneous answer. The other issue you see is that a lot of the AI models are not being updated constantly, so they’ll still think it’s 2021, for example.
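
Running that transitive test against a model on your own machine is straightforward with the open-source gpt4all Python bindings. In the sketch below the model filename is an assumption (any chat model downloaded through the GPT4All application would do), and nothing leaves the local machine.

```python
# Local-model sketch: pose the transitive test to a model running entirely
# on this PC via the gpt4all Python bindings.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # assumed local model file

prompt = ("If John is faster than Fred and Fred is faster than Sarah, "
          "is John faster than Sarah? Answer yes or no, then explain.")

with model.chat_session():
    print(model.generate(prompt, max_tokens=128))
```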

AP: Right. And what you just said reminds me of something. There’s all this somewhat overblown talk that AI is going to take over the world. Well, AI is not going to take over the world if the content that it’s basically scraping, and I know that’s really simplifying things a whole lot, if that content is not good, if it’s not updated, if humans aren’t putting intelligence into it, it’s not going to be that useful. We still have to provide the underpinnings for a lot of the intelligence in these systems. So are our brains going to be replaced today? Probably not.

RD: No. But the bar is getting lower and lower as time moves on.

AP: Fair. That is fair.

RD: It’s definitely getting better. For example, OpenAI has updated ChatGPT so it will now actually go out to the net and get more up-to-date information. It may not have internalized that information, but it will actually perform a web search, extract information that way, and come back with it. That was released recently. You also have work going into how quickly you can train a model, which is a huge thing. GPT-4 was reportedly trained with 100 trillion parameters, which took weeks and weeks, and doing a new one using that methodology would continue that curve; it would take months to train. But there’s now work being done on the question: if I have a pre-trained model, how do I quickly iterate on that model so that it doesn’t take me weeks? It may just be a question of ingesting new information on a daily basis, a little bit of news feeds or that type of thing.
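
The “iterate on a pre-trained model instead of retraining from scratch” idea is roughly what parameter-efficient fine-tuning does. Below is a minimal sketch using Hugging Face’s transformers and peft libraries with LoRA adapters; the base model and hyperparameters are stand-ins for illustration, not a recommendation.

```python
# Parameter-efficient fine-tuning sketch: wrap a pre-trained model in LoRA
# adapters so only a tiny fraction of weights needs training on new data.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's attention projection layer
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% trainable

# Training just these adapters on a small, fresh dataset (e.g. daily news
# feeds) takes hours on one GPU rather than weeks of full pre-training.
```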

AP: Sure. Let’s talk about risk to wrap up here. I brought up the copyright angle. What do you see as a big concern here, your biggest concerns?

RD: So, there’s a couple of things that are big concerns of mine. One, I feel like people anthropomorphize AI a lot.

AP: Yes.

RD: They’re having a conversation with their program, and they assume that the program has needs and wants and desires that it’s trying to fulfill, or even worse, that it has your best interest at heart, when really what’s going on behind the scenes is that it’s just a statistical model that is large enough that people don’t really understand what’s going on. It’s a model of weights, and it’s emitting what it thinks you want to the best of its ability. It has no desires or needs or agency of its own.

AP: Yeah, I want to make t-shirts: “Large language models are not people.” So yeah.

RD: The other thing, and there’s some press about this, is bias.

AP: Yes.

RD: A good example of that, or a not-so-good example of that, is when you have an AI model that hasn’t been trained on anything but Western culture. It’s inherently biased towards American values, American positions on the world. What the AI will spit out may not be culturally acceptable in other places, and vice versa. An AI trained in China is probably not going to give you the same responses on things that you care about in America.

AP: Yeah.

RD: Also, a lot of these companies have built-in rules, and there’s actually a game going on around them. Microsoft’s AI started as a program code-named Sydney, and there’s an ongoing game where people doing prompt hacking, or prompt engineering, try to discover all the rules inside Sydney. It’s things like, well, Sydney will never call itself Sydney, and so on. It almost starts devolving to the point where you’re dealing with Isaac Asimov’s three laws of robotics or RoboCop’s prime directives: a list of instructions that override the basic things the AI can do. This is probably getting too philosophical for a content transformation podcast, but these types of things will color responses. So if you ask an AI, when it’s ingesting a program, to emit certain key characteristics, those key characteristics may be shaded by these rules, shaded by this training.

AP: And that training came from a person who inherently is going to have biases, right?

RD: Exactly.

AP: Yeah. Yeah.

RD: So that type of thing is a problem.

AP: Yeah. I mean, AI in a lot of ways is a reflection of us.

RD: Yeah.

AP: Because it’s, a lot of times, parsing us and our content, our images, and whatever else. This has been a great conversation. It went some places I didn’t even expect, and that is not a criticism, trust me. So thank you very much, Rich. I very much enjoyed this, and it’s good to have a more balanced, realistic conversation about what’s going on here. I appreciate it a whole lot.

RD: Okay. It was very nice talking to you.

AP: Thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.
