
DCL Learning Series

Ready, Set, AI: How to Future‑Proof Your Content, Teams, and Tech Stack – a Comtech/CIDM webinar


Patricia Grindereng (Comtech / CIDM)

In today's webinar, “Ready, Set, AI: How to Future-Proof Your Content, Teams, and Tech Stack,” we have Dipo Ajose-Coker of RWS, Marianne Calilhanna of Data Conversion Laboratory, and Sarah O’Keefe of Scriptorium – welcome to you all.


Dipo Ajose-Coker

Thank you.


Sarah O’Keefe

Thank you.


Dipo Ajose-Coker

I'll start sharing now. Just confirm that I'm not sharing my email inbox.


Marianne Calilhanna

[Laughs] Yeah. It looks good, Dipo.


Dipo Ajose-Coker

All right. Excellent. So, well: "Ready, Set, AI!" Let's go. Let's future-proof your content. Basically, we thought we'd put this together with Sarah and Marianne. We did a similar webinar, well, a similar presentation, at the ConVEx San Jose conference in March. And following the enthusiasm from that, we thought "Let's bring this out. Let's try and get this out to more of our crowd out there." The appetite for AI just continues to grow. There are new developments every day, and people are feeling "I'm getting left behind." They want to jump onto that bandwagon as quickly as possible, but what we want to do is try and help you prepare for that.


You don't want to jump on with jumbled-up content. You want to prepare that content. You want to prepare your teams and your organization so that you can be successful and then not throw it out the window after six months to a year. We're hoping at the end of this session that you'll be able to assess your content landscape, spot gaps in the structure and the governance and findability before AI exposes those. We want to start building an AI-friendly pipeline. We'll be giving some practical steps to help you get on that way. We want to help you manage the change, change management. People are hard. Tech is easy. People are hard. So you want to start trying to change some of the anxiety around that, mitigate the risks, and then we'll maybe try and give you some quick-win scenarios that will help prove value very quickly.


Before we go on, I thought I'd share this with you – oh, that's us. Yeah. Sorry. So RWS underwent a rebranding, and it just so happened that I came upon this slide, and it's like "Well, you want to be generating content. You're going to be transforming that content, and you want to also protect your own content." And when you do start preparing your content, if you have prepared it properly, the impact is transformational. You will be able to get really good use out of your AI. You'll be able to improve workflows. You'll be able to generate that content quicker. It'll be more accurate. You can't have an assembly line without machined parts. The machined parts have to be consistent in nature, and they're designed to fit together in a certain number of ways. You can't just mishmash them all together.


So that's what we're going to be doing today. We are going to look at how you can standardize those parts, how you can label them, create all that sort of stuff, and put it together so that you can generate, transform, and protect your content. Oh, you've already been introduced to us. I'm just going to quickly skip over this one. Sarah O’Keefe from Scriptorium, Marianne from Data Conversion Laboratory, and myself. I'm with RWS, and I work on the Tridion Docs product.


4:08

Now, just a quick recap. At ConVEx, we thought we'd try something out: we put out some LEGO sets, the LEGO Creative Suitcase, and we tried to simulate structuring your content. Everyone knows LEGO is that classic metaphor for the power of structured content. The pieces are modular. They're reusable. They're flexible – not when you step on them at midnight, but flexible in their use. You can scale the content, and they're built according to a standard. LEGO understood that a long, long time ago. IKEA followed suit with standardized modules that you can scale and build different things out of. And we gave these sets out, and in some of the sets, we structured and semantically tagged the content.


What did we do? We sorted by color. We put them into different boxes. And one of the boxes, we just threw everything in, and we actually took the instructions out. The result was so funny. You should take a look at some of the blog posts that we put out on that, and I think I'll try and share that video that we created on there. But basically, what we were trying to do was, even if you have got structured content, if you don't label it properly, if you don't create those relationships between the pieces, then, well, you end up building nonsense. And so, we thought we'd show you the results of having proper structure. You have reusable bits. Those petals that you see or the leaves that you see on the ground, those are actually reusable as frogs.


Thanks, Marianne, for putting this together. They're modular pieces that you can then use to build something else. So here, we have a bonsai tree, but maybe you might be able to build another type of tree out of them. And on the right, with no instructions – i.e., no metadata, no industry standard – there's no organization. You've not put it into a CCMS. There's no relationship between the pieces in the metadata, and your AI hallucinates. Who can guess what this is? Answers in the chat, please. Marianne, do you want to speak to this a little bit?


Marianne Calilhanna

Yeah. I've always thought that this new series of LEGOs that came out, they're these blooms, these flower sets. I wondered if LEGO has been listening to all the metaphors in the DITA, in the structured content world, because there's a series of LEGOs that used those pieces. So in this example with the bonsai tree, yeah, they're little frogs, and it was my kids who told me LEGO had all these extra pieces. So they thought "Well, we could reuse these." So I guess this metaphor is sort of going both ways. Yeah.


That left image is a LEGO set with instructions: kids put all the like pieces together, follow the instructions, and then, boom, they create this great piece. And on the right is a facsimile, a reproduction of what happened in real time when we were at ConVEx, where we didn't provide the "CCMS": we threw all of the LEGO pieces into the LEGO suitcase that Dipo brought, with no instructions. And while everyone was creating their little horses or their little – I forget what else we had. I don't know if any of you remember, like a little knight or –


Sarah O’Keefe

Well, small people, a couple other things.


Marianne Calilhanna

Small people. Yeah. Then one that was just kind of crazy. It was cool-looking, but we were like "What's that?"


7:58

And that was clearly the AI hallucination, because it came from the group who was working with the LEGO set that had all the pieces jumbled. They had no instructions. When we set the scenario and asked folks to create something, they kind of looked up, like, "Well, there are no instructions. What do we do?" And they saw everybody else putting things together, nice and tidy and organized, and they were really scrambling, and boy did it really capture this conversation that we're about to have, that we've all been having for quite some time. Yeah. And so, Sarah, when we're talking about preparing content for AI, what does that mean? Talk to us about what that means when you're trying to organize that content.


Sarah O’Keefe

So remember that AI is looking for patterns. And so, the big-picture answer is that if your content is predictable, repeatable, follows certain kinds of patterns, and is well-labeled, then the AI, if we're talking about a chatbot extracting information, will perform better. So the big-picture answer to how do we make sure that the AI works is all the things we've been telling people to do. Structure your content. Have a consistent content model. Be consistent with your terminology and your writing and how you organize your sentences and your steps, and your this and that, and add metadata, taxonomy. Put a classification system over the top of it in the same way that you would sort these blocks by color or size or function, or all of the above. Right?


One of the great advantages of metadata is you can sort on two axes or three, or fifteen. But the thing to remember as you move into something like this is that AI, with its pattern recognition and its machine processing – and you touched on this, Dipo, when you said machined parts have to be consistent – AI is going to expose every bit of content debt that you have. Every case where there's an edge case, where something's not consistent, where you didn't quite follow the rules, it's going to think – it doesn't think – it's going to "think" "Oh, that's significant," and it's going to try to do something with it. So think about the distance between your ideal state content, which, of course, we will never get to, and your current state content. How do you close that gap? How do you make that gap as small as possible so that the machine, the AI, can process your content successfully?
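To make Sarah's "two axes or three, or fifteen" concrete, here is a minimal sketch of faceted filtering over content metadata. This is illustrative only, not from the webinar; the topic records and facet names are hypothetical:

```python
# Minimal sketch of faceted filtering over content metadata.
# The topic records and facet names are invented examples.
topics = [
    {"id": "t1", "product": "X100", "audience": "service", "type": "task"},
    {"id": "t2", "product": "X100", "audience": "end-user", "type": "concept"},
    {"id": "t3", "product": "X200", "audience": "service", "type": "task"},
]

def filter_by(topics, **facets):
    """Return topics matching every requested facet (any number of axes)."""
    return [t for t in topics if all(t.get(k) == v for k, v in facets.items())]

# Filter on two axes at once:
service_tasks = filter_by(topics, audience="service", type="task")
print([t["id"] for t in service_tasks])  # → ['t1', 't3']
```

The point of the sketch: each extra facet is just one more key in the query, which is why well-applied metadata scales to as many classification axes as you need.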


Dipo Ajose-Coker

Marianne?


Marianne Calilhanna

Yeah. Just one other thing I want to add with this conversation. We talked about modularity, reusability, interoperability, and standards. We have these standards in place across our industries for managing content, and it supports all of this that we're talking about. So that's great, because you don't have to start from scratch, and an example would be DITA. Probably most people here are familiar with that term, but DITA is a standard way of tagging and structuring your content so that the supporting tools are there and understand that language as well as the large language models.
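For readers who haven't seen DITA markup, here is a minimal, illustrative DITA task topic (not from the webinar) showing the kind of standard, semantic tagging Marianne describes. Downstream tools and LLMs can rely on elements like `prereq` and `cmd` always meaning the same thing:

```xml
<task id="replace_filter">
  <title>Replace the filter</title>
  <taskbody>
    <prereq>Switch off the device.</prereq>
    <steps>
      <step><cmd>Open the access panel.</cmd></step>
      <step><cmd>Slide out the old filter and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```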


Dipo Ajose-Coker

Yeah. The fact that it's standardized means that toolmakers – people who are creating software, who are training LLMs – can rely on that standard structure, a shared language that says "this element means this." And that way, when you feed it in, you get a consistent sort of output.


12:05

If you want to avoid chaos, you want to maybe think about relationships between the elements and how you organize the content within that system that you're putting it all into. Marianne, talk to me a little bit about this.


Marianne Calilhanna

It was funny. The other day, I was doing something outside of work. I was working on a website for something else, and I kept running into a problem, and I tried to search through the help files. Couldn't find the answer, and I was like "Oh, no. I have to resort to the chatbot. Well, here I go," and I had a fantastic experience with the chatbot. I hate to say this, but it was probably the first time ever, and I thought – and we've been talking about chatbot. We talk about how structured content helps with this.


But for the first time, I was like "Wow – problem, question, answer. Just flawless." And all I could think about is "Boy, I want to ask them what they're doing behind the scenes." I was completely fascinated, because when you have your content, your knowledge structured, when you have the metadata, when you have those relationships identified, that supports the AI to understand those relationships, to improve the contextual responses, and ultimately, it gives a great user experience, and that's what probably everyone here on this webinar wants.


Dipo Ajose-Coker

Yeah. I think one of the things that I try to use to prove that – you have to establish those relationships first, because otherwise, you don't know what you're talking about. I say "Who is your brother's uncle to you?" Your father's brother. I gave it away there, didn't I? "Who is your father's brother to you?" I overthought it. But basically, who is your father's brother? Your uncle. How did you learn that? Well, when you were growing up, we established these relationships.


And if an alien landed on Earth, pointed to that person, and asked you "Who is that?" you'd say "Well, my uncle, Ralph." There's just no other way. There's no logical reason why you would call that person your uncle. It's just an established standard, and those standards have been translated into all the different languages. Sarah, if you think of a CCMS, do you think a CCMS will solve all of our problems?


Sarah O’Keefe

Oh, of course. I mean, absolutely. Also, it's worth noting that "father's brother" is not the same word in every language as "mother's brother." Right? So even in that example, there's some nuance, which is kind of interesting. CCMS. So a CCMS is basically the case here. Right? It's the container that you can sort all of your LEGOs in. Now, it is perfectly possible to purchase a CMS – or sorry, a CCMS – set it up, and dump all the LEGOs in without sorting them. Right? I mean, just having a CCMS does not give you this lovely classification system that we've established here. So "necessary but not sufficient" is probably the answer we're looking for. Arguably, you can make an attempt to classify and structure your content without a CCMS.


16:00

It's a tool that helps you enable it and do it more efficiently. But, I mean, this is going to be my refrain for the next twenty years. You still have to do the work. Right? You have to put in the work before you can leverage the machine or the software or the automation.


Dipo Ajose-Coker

Perfect. So we've been talking about strategy here. So what are the tactics that you want to employ here then in preparing for AI? Sarah, do you want to go with this?


Sarah O’Keefe

Yeah. So you have to do the work, and then risk mitigation is the other thing that people are thoroughly sick of hearing me say. Right? You need to put the content in a repository. Okay. But if you still have eighteen copies of the same or same-ish piece of content, and then I, as an author, search for that content, I'm going to find one of the eighteen copies, and that's really bad. Right? So you have to find those duplicates. DCL, by the way, makes a lovely product that can help you do this. So you have to find the duplicates. You have to get rid of the redundancy, because that decreases the total amount of content that you're working with, which is helpful to you in your daily life as a content creator, manager, author, whatever. But also, again, fewer parts, more consistency.
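As a toy illustration of the duplicate-finding step Sarah describes – real tools such as DCL's Harmonizer are far more sophisticated, and the sample modules here are invented:

```python
import difflib
from itertools import combinations

# Minimal sketch of near-duplicate detection across content modules.
# The module texts are invented; two differ by a single character.
modules = {
    "warn_a": "Warning: disconnect the power supply before servicing.",
    "warn_b": "Warning: disconnect the power supply before servicing!",
    "intro":  "This guide describes routine maintenance procedures.",
}

def near_duplicates(modules, threshold=0.9):
    """Return pairs of module ids whose text similarity meets the threshold."""
    pairs = []
    for (id1, t1), (id2, t2) in combinations(modules.items(), 2):
        ratio = difflib.SequenceMatcher(None, t1, t2).ratio()
        if ratio >= threshold:
            pairs.append((id1, id2))
    return pairs

print(near_duplicates(modules))  # → [('warn_a', 'warn_b')]
```

Even this crude pairwise comparison surfaces the "same-ish" copies Sarah mentions; production tools add normalization, scalable indexing, and reporting on *which* variant is the correct one.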


The silo issue – not to get too far off the general topic, but one of the big issues that we're seeing now is an increasing interest in structured content for learning content, which tends to be in its own silo away from the tech comm content. So how do we bridge that? How do we break those apart? Do we combine them? Do we put everything in a single location, in a single storage, or do we find some way of cross-walking from, let's say, the CCMS to the LCMS, the learning content management system? And then how do we make all of that searchable? Again, if I'm searching for a particular piece of content, but I'm searching the wrong repository, it doesn't turn up, and then I write it again, and now we have duplication. So all of these things tie into having a much better understanding of and much better control over your content universe as an author or as a content creator.


Marianne Calilhanna

Yeah. And when you're starting a project like this, you need a starting point – well, how do I even begin to tackle this? It's a trite saying, but you don't know what you don't know. If I'm in one department, I don't know what David did over there, but it's true. We have a tool called Harmonizer, and we love seeing the looks on customers' faces when they're gobsmacked, like, "I had no idea that we had this many versions," or "Oh my gosh, everything was right in eight of these versions, except one had a near-fatal instruction over here." You just don't know unless you do that inventory. It's like another metaphor: you're moving to a new home, and you have to pack up everything, and you get all the glasses that you're going to move, and you're like "Why do I have fifty-six pint glasses for a family of four? Let's get rid of this. Let's clean it up." It's a pretty profound experience. You end up feeling refreshed, like "Okay. Now I can start this massive undertaking and know that I'm doing it in an organized way."


19:58

Dipo Ajose-Coker

So talking tactics: you want to talk to people who have experience helping you classify and structure your content and choose the model that you want to use, and then you want to use services that will help you identify and detect those duplicates and make those decisions as to whether or not to keep an extra copy of something – because maybe there is a reason why there are two warning messages: one is for an older version of the software, and the new one is for version six onwards, and things like that. Sorry, Sarah, you were going to say?


Sarah O’Keefe

Well, the moving metaphor is a great one, because, A, you discover you have fifty-six pint glasses. And thanks, Marianne. I feel a little bit attacked on that one for no reason. But you throw away a bunch of them, and then you move, and then as you're unpacking, you find thirty more, and you're like "Ugh." And then you keep throwing things away. So it's like an ongoing battle against those glasses.


Dipo Ajose-Coker

And then you have that dinner party, and then you find out that you threw away too many of them, or you threw away that special one, the one that was from Auntie Edna, who wanted to see it, and you're having Auntie Edna around. You just threw it away, all of that sort of stuff. Let's move on. Come on. Change over. So metadata, your instruction manual, sort of, in a way. Marianne, talk to me about this.


Marianne Calilhanna

Yeah. Okay. We're probably throwing out too many metaphors, but nonetheless, I'm going to throw out another one.


Dipo Ajose-Coker

I love them. I love metaphors.


Marianne Calilhanna

But I always think of metadata and taxonomies when you're talking about governance and everything that goes into knowledge management and content management. I think of it as an iceberg. You've got all this visible stuff – content that your employees see, content your employees use, what your customers are searching for – but underneath is an even larger ecosystem. It's the larger part of the iceberg that supports that top part. And when you think of metadata and taxonomies, I think a lot of people think "Oh, I'm done. I've tagged all my content. I've got this taxonomy. I'm finished with my knowledge management."


But I always advise shifting away from that mindset of being finished, because you're never really done. Language is living. Industry terms change. Were we using the term "large language models," LLM, in the '90s? No. So you have to always iterate through your knowledge management, your content management, and make a point to revisit it in whatever time frame is relevant to your organization, your industry. So those are some of my thoughts about metadata and taxonomies. Sarah, what do you think?


Sarah O’Keefe

Well, nobody likes governance. Right? Governance is the sort of dirty work of keeping everything under control and having processes and having rules, and ensuring that the content that walks out the door is appropriate and compliant and ties right back to the previous slide, which talks about risk. So I think, Marianne, you've covered all the key things. What I would say is that your governance framework needs to match your risk profile. Right?


Marianne Calilhanna

Mm-hmm. Great point.


23:58

Sarah O’Keefe

And canonically, we always talk about medical devices as something that has very heavy compliance and also a lot of risk, because if a medical device is not configured correctly, if the instructions aren't right, if either the medical professional or the end user, the consumer, misuses it, it could have some dire effects, by which I mean dead people. Right? So your governance framework needs to match up with the level of risk that's associated with the product or the content that you're putting out the door. And a video game is my canonical doesn't-need-a-lot-of-governance example, except for a couple of things: all our video games have warnings at the beginning about flashing lights and epilepsy.


And also, video game players, gamers tend to be very, very unforgiving of slow content. Right? There's a wiki somewhere. It's got all this documentation in it, and they'll update it and make changes. So the governance isn't really there in the sense that people can do it themselves. But if you were to tell them "Oh, it'll take us six months to put that update in," that would be totally, totally unacceptable. So your governance is going to depend on the type of product, the type of content, the level of risk, the types of risk, and you need to take that into account.


Marianne Calilhanna

Yeah.


Dipo Ajose-Coker

And I'll just add on here that you could have all the rules in the world, but if you've got no way of enforcing them, then you might as well have written them on a piece of paper and put it on the back shelf. You need a tool – and this is where I've got to talk about the CCMS part of it – that is able to help you enforce the rules. The standard helps you enforce the rules. You can create that model, but if you say "These people are not allowed to change it," or "You can only change this," or "You can only duplicate this content in this particular scenario," you need the tool, because there's no one sitting behind every writer saying "Naughty, naughty, naughty. You shouldn't have duplicated that."


However, the tool is able to stop you from duplicating that content, and you want to balance automation with human quality assurance. So you've got the tool that is going to stop you – or maybe it's just going to prompt you, or send a message to the manager, saying "This content has been duplicated. This content should not be duplicated. We prevent you from using it in this particular manual, because the metadata tells us that it's not applicable."


Marianne Calilhanna

Hey, Dipo, we did have a question come through. Were you going to say that?


Sarah O’Keefe

Yes, I was going to say the same thing. Go for it.


Marianne Calilhanna

And I think it's relevant to talk about you bringing up tools. Someone asked about, just to clarify what we mean by interoperability. So bringing up the CCMS is a good example. Maybe one or both of you could comment on interoperability, sort of explain that, make sure we're all on the same page here.


Dipo Ajose-Coker

Yeah. Interoperability. First of all, the standard that we are pretty much all talking about here is DITA, and DITA is designed in a way that you can use it alongside other XML standards. You can easily translate it and match it, create a matrix. But also, you want your CCMS to be able to connect to other tools and take information from other databases. One particular example that I see happening is in the iiRDS world – that's another standard, another XML standard, used to classify parts. And in the automobile industry in Germany, they were really hesitant to move to DITA, because they had these vast databases and vast systems that classified all their parts and everything.


28:00

And they did not know how to connect it. And iiRDS was put together to help create that standard language for DITA systems to connect to and understand what's coming from a parts system. So interoperability is your system being able to connect and exchange information intelligently and easily with other systems that you might be using within your organization. Sarah?


Sarah O’Keefe

Yeah. No, I think that covers it. I mean, ultimately, there are some infamous tools that are not particularly interoperable. I'm thinking of Microsoft Word, InDesign. Usually, when we start talking about interoperability, we're talking about a couple of different things. One is, as you said, DITA itself, which is a text-based thing that we can process, so machine-processable. But also, is the place where you're storing your content accessible? Can we connect into and out of it? That usually means, is there an API? Is there an application programming interface that allows me to either reach in or push out the content to other places that it needs to go? And I would say that there's a lot of work to be done in that area, because our tools are not as cleanly interoperable as I would like.


Dipo Ajose-Coker

Yeah.


Marianne Calilhanna

Yeah.


Dipo Ajose-Coker

And actually, if we're talking about AI – sorry, Marianne. If we're talking about AI, there's an interesting buzz term coming out, and that's MCP. It's this middle standard that's coming in – I think it was Anthropic that put it out. It's the Model Context Protocol, and – everyone is talking about agentic AI – it allows your LLM to interact and talk to any clients that are being built. Loads of people are building these little clients to help you write stories or help you create an image, and then they have to connect to a large language model. Whenever a new model is created, all the developers would otherwise have to go and change their code and all that. MCP stands in that middle bit and allows the interoperability between large language models and client AI applications.
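For the curious: MCP is built on JSON-RPC, and a simplified sketch of the message shape shows why it decouples clients from models. The tool name and arguments below are hypothetical; see the Model Context Protocol specification for the real schemas:

```python
import json

# Simplified sketch of the JSON-RPC message shape that MCP builds on.
# "search_docs" and its arguments are invented for illustration.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",  # hypothetical tool exposed by an MCP server
        "arguments": {"query": "replace filter", "product": "X100"},
    },
}

# Any MCP-aware client can emit this message, and any MCP server exposing
# a "search_docs" tool can answer it. That is the interoperability Dipo
# describes: the client never needs to know which LLM or backend responds.
wire = json.dumps(tool_call)
print(json.loads(wire)["method"])  # → tools/call
```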


Sarah O’Keefe

Yeah. And there's a question related to this, which I think I'm going to pick up. Basically, the poster says "My dev teams want all the content in Markdown for AI consumption. Metadata and semantic tagging is stripped out of our beautiful XML." So yeah, this is a huge problem. We've got a couple of projects that are – look, so to the person that wrote the question, it could be worse, because we have customers where the dev team or the AI team actually is requesting PDF. So as bad as you may feel about your Markdown situation, it actually could be a whole lot worse.


But ultimately, this is a problem around its interoperability, because the AI building team didn't really think too carefully about what's the input that we're going to get. Right? And you could go to them and say "I have this amazing DITA content, and I can feed it to you in all sorts of ways with taxonomy, with classification, with everything." And they say "Cool. Give me PDF, or strip it down to HTML," which is at least better than PDF. And even your Markdown example, I mean, it's not great, but it could be so, so much worse.


32:00

So this is a problem, because if we, as content people, are providing inputs to the AI, then we need to be stakeholders in how that AI is going to accept the content, and not just be told "Give me PDF," and walk away. There's a related question about best medium to feed LLMs, and the answer is, of course, it depends, although I'll let the two of you jump in. But I would say that if you're starting from DITA, if you're starting from structured content, then probably you're looking at moving your structured content into some sort of a knowledge graph and using that as a framework to feed the LLM. That would be my knee-jerk, context-free answer.
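Sarah's knee-jerk answer – structured content feeding a knowledge graph, which in turn feeds the LLM – can be sketched in a few lines. This is an illustrative toy, not anything from the webinar; the topic records and predicate names are invented:

```python
# Minimal sketch of turning structured topics into knowledge-graph triples
# that can ground an LLM via retrieval. All topic data is invented.
topics = [
    {"id": "t1", "title": "Replace the filter", "product": "X100",
     "audience": "service", "related": ["t2"]},
    {"id": "t2", "title": "Filter specifications", "product": "X100",
     "audience": "service", "related": []},
]

def to_triples(topics):
    """Flatten topic metadata into (subject, predicate, object) triples."""
    triples = []
    for t in topics:
        triples.append((t["id"], "hasTitle", t["title"]))
        triples.append((t["id"], "appliesTo", t["product"]))
        triples.append((t["id"], "forAudience", t["audience"]))
        for r in t["related"]:
            triples.append((t["id"], "relatedTo", r))
    return triples

graph = to_triples(topics)
print(len(graph))  # → 7
```

Triples like these can be loaded into a graph store and retrieved as context for an LLM prompt – preserving the relationships that a flat PDF or stripped-down Markdown export throws away.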


Dipo Ajose-Coker

Yeah. And basically, that just segued us into this slide. Training your writers. AI is not going to fix your bad input, and then you've got to talk about IP, intellectual property, copyright, audit trails. Let's dig into this a little bit. Building something meaningful. How do you build something meaningful? Sarah?


Sarah O’Keefe

Right. So garbage in, garbage out. I've come up with a couple of other acronyms that go around this. But again, you have to do the work. You have to have good content. You have to have content that is relevant and contextual and structured and accurate. Right? And one of the key reasons, I think, that we're running into this "Oh, just let the AI write all the content" problem. Right? This is kind of like "anyone can write" 2.0. The AI can write. Cool.


One of the problems, one of the reasons this is happening, I think, is because at the end of the day, there's a lot of really, really bad content out there. Right? When we say "No, you need content professionals," and the C-level person is looking at their content, saying "But what I have is not good," I can have the AI produce "not good." Right? It can be equally not good, and it's fast because it's a machine. So we have to create useful, valuable, insightful, contextual content so that you can build an AI over the top of that to do interesting things and not resort to generative AI to just create garbage.


Marianne Calilhanna

Yeah. And the hyper-focus too. So someone who's very specialized – maybe a researcher looking for advances in CRISPR technology for pediatric oncology. I'm just making that up. But you want to make sure that you have a system, an environment, that is looking just at the literature that you need for your research. So that's a great example where structured content, maybe combined with RAG, is going to make sure that you stay within that specialized subject area that you want to focus on. That's really critical for you.
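A minimal sketch of the retrieval step in the RAG pattern Marianne alludes to: rank sources by term overlap with the question, then feed only the best match into the prompt. Real systems use embeddings and vector search; the corpus here is invented:

```python
# Toy retrieval step for RAG: pick the document that best matches the
# question, so the LLM answers only from the specialized literature.
corpus = {
    "d1": "CRISPR screening approaches in pediatric oncology trials",
    "d2": "General overview of gene editing history",
    "d3": "Dosage guidelines for adult chemotherapy",
}

def retrieve(query, corpus):
    """Return the document id with the greatest word overlap with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda d: len(q & set(corpus[d].lower().split())))

best = retrieve("CRISPR advances in pediatric oncology", corpus)
# The prompt then constrains the model to the retrieved source:
prompt = f"Answer using only this source:\n{corpus[best]}\n\nQuestion: ..."
print(best)  # → d1
```

Structured, well-tagged content makes this step dramatically more reliable, because retrieval can filter on metadata (audience, product, subject area) rather than raw text alone.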


Dipo Ajose-Coker

Yeah. As you were talking, I was thinking about that old analogy. If you give a thousand monkeys a thousand typewriters, they'll eventually come up with the works of Shakespeare. But in the meantime, you're going to be reading a whole load of gobbledygook.


Sarah O’Keefe

Yeah. The version of that I saw was a thousand monkeys and a thousand typewriters. Eventually, they'll produce Shakespeare. But now, thanks to the internet, we know this is not true.


35:56

Dipo Ajose-Coker

[Laughs] Okay. So, well, structured content is the foundation. We've just established that. It turns the potential of your AI into something that can be performant. What else is involved in here? Structured content fuels your AI. Marianne, talk to us about this a little bit.


Marianne Calilhanna

Yeah. I mean, I think we've sort of beat this to death, and anyone who's talked to me has probably heard me say so many times, structured content is the foundation for innovation. It's the starting block. And I think it's also, when you talk about the kinds of organizations with whom DCL works, RWS and Scriptorium, they're also working at scale, so large volumes. So that's also when you need to shift to this way of working and this way of thinking, because to enable automation, to enable intelligent reuse at scale, large volumes, that's really when you also need to consider the move to structured content so that you can deliver things without that manual intervention. I can have that great chatbot experience that I've never had in all these years, because I know behind that, there's modular, tagged content that is just hyper-focused to what I needed, to my problem.


Dipo Ajose-Coker

Yeah. Basically, you're able to deliver to the different channels without having to retool everything. There's no need to rework it and say "We want to create a PDF this time. Could you rearrange it?" The metadata behind it allows the AI or whatever tool you're pushing it into to understand that this is going to a mobile device, or this answer is for a chat, or this answer is going into the service manual for someone with this level of qualification. All of that is what allows you to scale and say "We'll create that content once, and when we update it, we can easily push it out to whatever channel we need to."


If you always have to think that "It's going to take us three weeks, because we put a new comma in," to then get it all out there, project managers are going to say "No. Forget it. We'll wait for the next big update." I'm sure half of the people in here have heard that phrase "Let's wait for the next big update before we make those changes." If you're able to make a tiny little change and push it out automatically at scale, this is that magic spot that you're looking for. So what's blocking AI readiness? Sarah.


Sarah O’Keefe

It's always culture. It's always change resistance. Those others are interesting, but ultimately it comes down to these three. Right? We've already seen a couple of comments about this in the chat, about the AI team building out something that's incompatible with what the content team is doing. So why is that conversation not happening? Well, because it never occurred to them that there were stakeholders. They don't think of content as a thing that gets managed. It's just an input, kind of like, I don't know, flour and sugar or something. So change resistance, organizational problems, organizational silos. Right? When we talk about silos, a lot of the time we're talking about systems: the software over here and the software over there can't talk to each other. But more often, the people over here and the people over there refuse to talk to each other.


Marianne Calilhanna

Yeah.


39:58

Sarah O’Keefe

And when I say "refuse," in many cases they're incentivized not to talk to each other, because their management, their upper management, doesn't talk. They don't collaborate. There's some competition. Have you seen those environments where the two groups hate each other? "Oh, no. We don't talk to them. They're terrible, and they live in that state over there that we don't like," and "Oh, they're in –"


Dipo Ajose-Coker

Yeah. Marketing gets it all the time.


Sarah O’Keefe

They're in Canada or they're in the US, or they're in France. "I don't even want to – they're in X location, and I've heard a lot about them and how those people are." It's like "Oh my lord, you work for the same company."


Marianne Calilhanna

And today, with global organizations working in hybrid or in a remote capacity, you're not even going to bump into those people getting a coffee like you used to in the old days when we were all in an office together or taking the same train to work. We got a question in the dialogue box that made me think we missed a bullet point here, and it's convincing management. So it's money. Right? So that's another thing blocking this, is dedicated funding to work a different way. And how do you convince management to do that? Great question.


Sarah O’Keefe

Yeah. Getting the business case is really, really important. And there's a number of problems there, but the big-picture problem is that content people in general are not accustomed to or talented at, take your pick, getting large dollar investments for their organization. They're sort of like "Oh, we're always last. We never get anything. We're over in the corner with no stuff." And when we start talking about structured content at scale and these scalability systems and an assembly line or a factory model for content and for automation and content operations, well, those are big dollar investments. Right?


Marianne Calilhanna

Yeah.


Sarah O’Keefe

And that's setting aside the question of expensive software. I mean, the software is not cheap, but that's not really the issue. The issue is the change: changing how people work, how their jobs evolve, and needing to not just put your head down and write the world's greatest piece of content, but rather "Oh, Marianne wrote this last year, and I can take it. If I modify one sentence, I can use it in my context also. And now we have one asset, and we're good," instead of making a copy because "I don't like the way Marianne wrote it, so I'm going to rewrite it in my voice." That type of thing. Right?


So AI, again, is going to require us to think about our content across the organization and across the silos, because at the end of the day, the AI overlord, the chatbot that's out there slurping up all this information and regurgitating it, it does not care about the fact that I work in group A and Marianne's in group B and Dipo's in group C, and we don't talk to each other. The chatbot, the world, the consumer sees the three of us as, if we're all in the same company, "They're all part of the same organization, so why shouldn't it be consistent?" And they're not wrong.


Dipo Ajose-Coker

Yeah. I mean, I actually did a presentation on building your business case for DITA, and one of the things I said is that content operations needs to get away from the mindset it's been put in, that it's a cost center. It's actually a revenue generator, and documentation is one of those final deciders.


43:57

And if you think of any company: we get bids from different companies, and one of the things we want to see is the documentation. I tell you, when I'm looking at buying a new water pump, because I just got flooded, I'm going to compare everything: compare the prices, go to all the review sites. In the end I've got two or three choices, and then I'm going to go and look at the documentation and see how well-written it is. Is there something in there that will help me make that final decision? And nine out of ten times, there is something in one vendor's documentation that helps with that final decision. We're running a little behind here, so let's take a hard look at it.


Marianne Calilhanna

Oh, boy. Yeah. Always.


Dipo Ajose-Coker

Yeah. Hey, we're talkers. So, AI readiness: are your blocks sorted? Before adopting AI, are your blocks sorted? That's the question. What are the things that you need to – okay, we've talked about it, but I just want to summarize it on this slide. Marianne?


Marianne Calilhanna

Yeah. I mean, I think we've hit everything here: governance and structure. We did miss, again, that executive buy-in. I keep going back to that question. And we joke now that for organizations looking to adopt a structured content approach, to get that executive buy-in, just tell your management team "We need it to enable AI." AI is that magic word –


Dipo Ajose-Coker

Yes. That's the magic word now, isn't it?


Marianne Calilhanna

– that will open the wallet. Yeah. But then that allows you to do the real things that are listed here: educate and align. At my company, we've started a bimonthly AI literacy lab where we watch a fifteen-minute video on an AI topic. It doesn't even have to be relevant to us, but then we have a conversation, and boy, is that sparking communication across all our different teams. It's getting us as a company thinking about so many different things in the vast AI world. But yeah, I'm just going to keep saying it again and again –


Dipo Ajose-Coker

Sarah, anything to add?


Marianne Calilhanna

– structured content is foundational.


Sarah O’Keefe

No, I think we've covered it. What else we got?


Marianne Calilhanna

Yeah. Yeah.


Dipo Ajose-Coker

Yeah. So I love this one, and then this is huge. Totally.


Marianne Calilhanna

I think this is really important.


Dipo Ajose-Coker

Sarah?


Sarah O’Keefe

Yeah. Yeah. Take a look at where you are. Right? Many, many, many organizations are down in that siloed bucket. There's a more detailed explanation of this, but it's a bog-standard, five-step maturity model, and you really just want to think about "How integrated is my content? How well is it done?" Is it in silos that are not connected? Am I doing some reuse, maybe with a taxonomy, and talking at the enterprise level? Have we unified our content? Are we managing our content, and is content considered strategic? That's the big picture of what we're looking at here.


Now, you do not want to go from level one to level five in four weeks. Very, very bad things will happen, mostly to you. So whichever level you're at, start thinking about "How do I move up one level? How do I make that improvement?" Make those incremental, reasonable improvements while you're in flight with your content, because almost certainly you can't throw it all away and start over. If you're in a brand-new startup, then congratulations, because you can pick a level and say "This is where we need to be for now, for our current size and company maturity," and think about what it looks like to move up as you go. But really, really think, honestly, about where you are on this and what you can do about it.
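Sarah's advice, assess your level and then move up exactly one, can be sketched as a simple self-assessment. The level names below paraphrase the discussion (siloed through strategic); they are illustrative labels for a generic five-step maturity model, not an official scale:

```python
from enum import IntEnum

class ContentMaturity(IntEnum):
    """Generic five-step content maturity model, levels paraphrased from the talk."""
    SILOED = 1      # disconnected content, no reuse
    REUSE = 2       # some reuse happening, still ad hoc
    TAXONOMY = 3    # shared taxonomy, enterprise-level conversation
    UNIFIED = 4     # unified, actively managed content
    STRATEGIC = 5   # content treated as a strategic asset

def next_step(current: ContentMaturity) -> ContentMaturity:
    """Move up exactly one level; never level one to level five in four weeks."""
    return ContentMaturity(min(current + 1, ContentMaturity.STRATEGIC))

print(next_step(ContentMaturity.SILOED).name)  # REUSE
```

The clamping in `next_step` encodes the point: the only sensible target from wherever you are is the adjacent level, and once content is strategic there is nowhere further to jump.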


48:02

And then a content audit, understanding what you have, both on the back end, stored, and on the delivery front end, can be very, very helpful to figure out what your next step needs to be.


Dipo Ajose-Coker

Yep. And then you consult your experts, and it's an ongoing engagement. What are those steps? We are here for you to speak with, and we'll give our contact details out. If you want to look at your content strategy, talk to Scriptorium. Once you've talked about the strategy and set up your model, and you want to start that migration, detecting duplicates and applying that strategy to how you deal with your content and how you tag it, then DCL is there for you.


And then when you're looking for the content solution, asking "Do I want this type of CCMS? Do I want it based on this standard?", you come to RWS. Together, it's a process: you audit your strategy and then your implementation, all the time. Get us all to talk to each other; that's why we thought it would be great having all three of us here. Don't create silos. Bring us all together and get us talking to each other, rather than talking to one of us without letting the others know where you want to go or where you've been.


Marianne Calilhanna

DCL stands for Data Conversion Laboratory, and we're often asked simply to convert content. Sure. But there are many times when people come to us and they would really benefit from speaking to Scriptorium or another strategic organization first. We much prefer working in this order, because we know that when it's time to convert that content and migrate it to RWS, it's going to go smoother for everyone, most importantly the customer. We can trust what the information architects at Scriptorium have identified, we know we have a very clearly defined target for that conversion and migration, and we know it's going to go seamlessly right into RWS. I just can't say that enough.


Sarah O’Keefe

Yeah.


Dipo Ajose-Coker

Yeah.


Sarah O’Keefe

And to this slide, there's an interesting point here, and we want to be careful. It's not that you cannot do AI with unstructured content. It is that structured content means you're going to have more consistency, more predictability, and essentially better machined content parts that you're feeding into the AI assembly line. And so, hypothetically, you can use unstructured content and feed it into the AI. The problem is you have to do way, way, way, way, way more work to get the AI to perform.


And I don't know about you, but, I mean, every day, there's another example of ChatGPT churning out inaccurate information. If I fed it better information or if it – not me – if it consumed better information, it would have better results. So structure means that we are enforcing consistency and enforcing all these things, and we can get taxonomy in there, and therefore, we can do a better job with AI processes. So that's what we're saying here, or at least that's what I'm saying here.
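The contrast Sarah draws can be sketched in a few lines. This is a toy illustration with hypothetical data, not any specific product pipeline: with unstructured text, a retrieval step has to guess at chunk boundaries, while structured content already arrives as coherent, labeled modules:

```python
# Unstructured source: one undifferentiated blob of text.
unstructured = (
    "Safety first. Unplug the pump. Installation: mount on a level "
    "surface. Troubleshooting: if the pump hums but no water flows, "
    "check the impeller."
)

def naive_chunks(text, size=60):
    """Fixed-size windows over raw text; boundaries can split a step mid-sentence."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Structured source: each module is already a self-contained, typed unit.
structured = [
    {"type": "warning", "text": "Unplug the pump before servicing."},
    {"type": "task", "title": "Installation",
     "text": "Mount on a level surface."},
    {"type": "troubleshooting", "title": "Pump hums, no flow",
     "text": "Check the impeller."},
]

def retrieve(modules, topic):
    """Return whole modules whose title matches a topic label."""
    return [m for m in modules if m.get("title", "").lower().startswith(topic)]

for chunk in naive_chunks(unstructured)[:2]:
    print(repr(chunk))  # arbitrary windows, often cut mid-sentence
print(retrieve(structured, "pump hums"))  # one complete, labeled module
```

The structured path hands the AI a focused, complete unit with its type and title intact, which is the "better machined content parts" point; the unstructured path forces extra work to reassemble meaning from arbitrary fragments.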


Dipo Ajose-Coker

I mean, yeah, why are we getting hallucinations? Well, the large language models were, for the most part, trained on unstructured content.


52:00

They hoovered up whole books and everything, not really structured, and so it's able to make things up. Imagine if it had only been trained on structured content; the answers would be better. So, well, I think we've come to the end here. We said we'd try and leave a little bit of space, one, for you to contact us. If you would like a copy of the slides we're using, you can write to any of us; our email addresses are up there. Get in contact with us and we'll be happy to send the slides, or set up a conversation with us. If you would like all three of us together, we're quite happy to do that: come into your organization, talk to you, and get the right experts in to guide you along your journey. So, questions and answers, Q&A session. Trish, what have we got?


Patricia Grindereng

Well, we've got a couple here. I do want to remind our attendees that we will send out a recording link to all those who registered, and I'll include everybody's contact information in that email. Just a reminder: use the Q&A, not the chat, for any of your questions. They look very interesting. "Is there a benchmark on how much energy processing is needed for AI to work through structured versus unstructured content?"


Sarah O’Keefe

That's a great idea. Not to my knowledge.


Marianne Calilhanna

That's a really good question. Yeah. We do know, of course, that AI uses a lot of energy and resources. I talk with my colleague Mark Gross about that a lot. He's a former nuclear engineer and a pragmatic person, and he always mentions "Well, the energy resource issue will catch up." AI is moving so fast over here, and we know this is an issue, but the processing is going to get better over time. I would love to see a benchmark like that as well. I'm going to start looking for that.


Dipo Ajose-Coker

Yeah.


Patricia Grindereng

Another one, and we may run out of time. And by the way, should you have any questions we don't get to, please reach out and contact Sarah, Marianne, or Dipo. So, "Are there studies that prove definitively that structured content improves accuracy with LLMs?"


Sarah O’Keefe

Also a great idea.


Dipo Ajose-Coker

Yeah.


Sarah O’Keefe

So, again, not to my knowledge. It's actually a very problematic question, because there's the comparison of consuming structured versus unstructured content. Let's say it's the exact same text, but one is a Word file and one is DITA topics or something like that. That never happens. What you then have to tease out is: when we moved this to structured content, we fixed all the redundancy, improved the consistency, and fixed the formatting inaccuracies, so how much of that plays into the improvements we may or may not see? Another great question. I don't know if we have any academics on the call, but if we do, I would challenge them to go look into that, because it sounds fun.


Dipo Ajose-Coker

Yeah.


Patricia Grindereng

Well, it looks like we've run out of time. Great discussions. Hope that you will join us back again at CIDM Webinars. And for that, I'll say goodbye, and thank you so very much for all who attended, and our panelists.


Dipo Ajose-Coker

Thanks so much for hosting us.


Sarah O’Keefe

Thank you, Trish. Thanks, everyone.


Marianne Calilhanna

Bye. Thanks, everyone.


Dipo Ajose-Coker

Thanks. Bye.


Patricia Grindereng

Bye-bye.


