
DCL Learning Series

Using the Process Data Module to Build Intelligence & Maintain Legacy Functionality

Marianne Calilhanna

Hello and welcome to the DCL Learning Series. Today's webinar is titled "Using the Process Data Module to Build Intelligence & Maintain Legacy Functionality." My name is Marianne Calilhanna, and I'm the VP of marketing here at Data Conversion Laboratory. Before we begin, I wanted to let you know that this webinar is being recorded and will be available in the on-demand webinar section of our website at dataconversionlaboratory.com. We will save time at the end of this conversation to answer any questions you have. However, feel free to submit anything that comes to mind, questions or comments, via the questions dialog box.


Before we begin, I want to quickly introduce Data Conversion Laboratory or DCL, as we are also known. We are the industry-leading S1000D conversion provider. We offer services and solutions that involve structuring content and data that support our customers' content management and distribution endeavors. Increasingly, we help our customers prepare to be AI-ready. DCL's core mission is transforming content and data into the formats our customers need to be competitive in business. We believe that well-structured content is fundamental to fostering innovation and foundational for your own AI initiatives.


I am thrilled to introduce today's speakers, my colleagues. First, Naveh Greenberg. Sorry, Naveh, it looks like I gave you a new title. Naveh is not president, but he is director of U.S. Development for Data Conversion Laboratory. Naveh is a PMI-certified Project Management Professional. He specializes in conversions for DCL's defense and tech doc business units and has been instrumental in developing DCL's data and S1000D conversion software.


We also have Chuck Davis. Chuck is DCL's S1000D and IETM subject matter expert. Chuck has more than 20 years of experience in aerospace tech data and tech data management. He works with our commercial and military clients. Chuck wrote the S1000D Aerospace Business Rules for the U.S. Coast Guard, and he was on the integrated product team for the development of the S1000D Business Rules for the U.S. Air Force.


And we have Jeremy Love. Jeremy is a project manager and software developer here at DCL. Jeremy deeply understands the tactical approach to implementing S1000D and the strategic benefits that brings to organizations. Welcome, gentlemen. I'm going to turn it over to you for today's conversation.


Chuck Davis

All right. So, hey, everybody. Thanks for joining us today. So, the process data module, I guess that's why we're all here. What is the process data module? That's one thing that seems to come up a good bit in discussions we have with different clients. And really, the best definition that I've seen came from Mike Ingledew at Tech Data World.


4:03

"It's where tech docs meets programming." It's an interesting concept. We used to use the old flowchart process flow diagrams when we were doing maintenance data or maintenance processes. Next slide. 


Naveh Greenberg

And Chuck, just a question. We see a flowchart with "yes" or "no." Do you see the process data module as similar to fault isolation?


Chuck Davis

It is, to a degree. Fault isolation would give you different options, yes or no answers to different options. Whereas with a process data module, you may be given a chance to input data into a field, and then, depending on your answer, it would drive you to your solution that way instead of having to sort through multiple yes-no answers and things of that nature. Okay, next slide. So, with the process data module, what does that mean for us? What does it give you?


So, with the process data module, you want your maintainers and end users to be able to use your IETM, access their content quickly, and make informed decisions. And when you have that, you get cost savings in the end. That's the bottom line. But also, the process data module definitely enables significant content reuse, so you're not having to reauthor data. And so, that's a basic explanation, to a degree, I guess, of the process data module. Next slide. So, I wanted to give a metaphor. I'm sure some of you in the crowd can relate to this picture. When you got ready to go on a trip, you would access MapQuest and print out a set of directions. And that's how we would get from point A to point B, which worked.


However, what would happen if you got to the third or fourth step of the directions and there was a detour or road closure, things of that nature? And then you were stuck, you didn't know exactly where to go. Next slide. So, the metaphor I wanted to give for the process data module, when I was thinking about how I could relate it to everyone, is that, in my opinion, it's similar to your GPS and Waze. Now, with Waze, when you log in, it asks you, "Where do you want to go?" That's the first step. So, basically: what maintenance action am I about to take? And then, as you're going through your trip, you may come across certain things and you may report something.


7:56

And so, if you report something, then traffic patterns or the directions may change for the next drivers, depending on the information that's provided. Next slide. So, going along with that example, here's a process data module that we worked on. And here, we're being asked, what are we getting ready to do? And so, you have to make your choice, which is, again, just like the GPS: where are you going? Next slide. So, with the process data module, as you go through the different maintenance actions and steps, it'll step you down through it. Here, you can see it would be on step three, and it shows you where you are as you follow along in the IETM. Next slide. Here is an example of where it's asking you for input. And so, this would be similar, not exactly but similar, to when you report something in GPS and it changes things for the drivers behind you. Here, though, it's a real-time example, a real-time question. And so, depending upon your input, it would change the direction you would go. Does that make sense?


Naveh Greenberg

Chuck, just a question. So, if you input the data in real time, how does the XML know what data you put in? Because you can put in anything, and obviously not everything is valid, so –


Chuck Davis

And Jeremy, step in if I've missed something here, but really, there may be a list of variables that would be acceptable answers here. It might be a span of numbers, or a value in PSI or things of that nature that you're having to input. But depending upon what's programmed in as acceptable answers there, that's how it would determine where you would go. So, here, it's showing an example where they're in the middle of a maintenance action, and they may have accidentally clicked something somewhere else or gotten distracted and hit something. And it's letting them know, "Hey, you're already in the middle of something, you've got to finish this first before you can move on to something else to finish out this maintenance action." Next slide. And then here we are where they have gotten to a spot where there's a required input, like what we saw before. And they may have hit next or okay. And it's letting them know that they've got to put something in there, they can't just move on. That error message would really be dependent upon your viewer and your rules as far as what is displayed when there's a problem.
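For readers who want to picture the branching Chuck describes, here is a minimal sketch, not taken from the project, of how a viewer might validate a required input against programmed acceptable answers and choose the next step. The function name, PSI range, and branch labels are hypothetical.

```python
# Minimal sketch (not from the webinar's IETM): how a viewer might validate
# a required dialog input against "acceptable answers" and pick the next step.
# The field name, thresholds, and branch labels here are hypothetical.

def next_step_for_pressure(raw_value: str) -> str:
    """Branch a process flow based on a user-entered PSI reading."""
    if not raw_value.strip():
        # Mirrors the "required input" error Chuck describes: the viewer
        # will not let the user continue until something is entered.
        raise ValueError("Input required: enter the measured pressure (PSI).")
    try:
        psi = float(raw_value)
    except ValueError:
        raise ValueError("Pressure must be a number, e.g. 14.5")
    if not 0 <= psi <= 100:                    # hypothetical acceptable span
        raise ValueError("Pressure is outside the acceptable range (0-100 PSI).")
    # Hypothetical branch logic: the programmed answer ranges decide the path.
    return "continue-normal-ops" if psi >= 15 else "go-to-fault-isolation"


if __name__ == "__main__":
    print(next_step_for_pressure("14"))    # -> go-to-fault-isolation
    print(next_step_for_pressure("22.5"))  # -> continue-normal-ops
```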


12:00

And so, that's an example there. Next slide. And then, again, back to the GPS mindset. Your process data module is complete, so you've arrived at your destination. And so, you can finish here. And then, if you finish, it may ask you, was there something else you wanted to do now? Depending upon the setup or what you were working in, what platform. And so, that's just a real quick explanation, I guess, of the process data module. Naveh, I'm going to turn it over to you now to give a little bit of background.


Naveh Greenberg

Thank you, Chuck. So, we are going to discuss a few projects where the legacy format was an IETM and the target was S1000D. The two main goals of these projects were to ensure that all the data was accurately migrated from the legacy format with no data loss during the migration process and, a very important but challenging task, to maintain the functionality of the legacy IETM. I assume that when you think about legacy conversion to S1000D, especially when the process data module is involved, and really when you think about legacy migration at all, you think about PDF, maybe Word, Quicksilver, maybe SGML. But you probably don't have in mind a highly functional IETM as your legacy format. For us, the starting point was an IETM. And that required a very different type of approach.


The project methodology here required very careful attention to detail and understanding of the way the IETM functioned. This webinar is not necessarily about project methodology, but it's very important to know how to properly approach a conversion task. We did a few webinars on the subject of project methodology, and they're actually on our website, so you can go and look for them later on. Even though we're not focusing mainly on project methodology, I do have to emphasize that, in any conversion methodology, you really do need an analysis phase before the conversion phase. In this webinar we are going to touch a little bit on the analysis, the conversion, and the results of the case study that we did. Next slide, please. So, the analysis phase, and I should really say analysis and planning, is a critical phase in any conversion project. For a project like this, without proper analysis, the chances of the project being successful drop dramatically. During our analysis, we had to understand a few things. The first thing that we had to do was to collect, or really extract, the legacy data, because the legacy data was stored in an Oracle database.


16:04

Some of the data was not even there. For example, data that is inputted by the user in real time, which you saw in the slides that Chuck was going over, was not there; it was controlled by variables. That's why we really had to understand how the IETM was functioning, not just what the data is. Some of the data was called in from external systems or, maybe, a different Oracle database. So, we really needed to understand the relationships between all the Oracle tables and any calls from outside the Oracle tables. We also had to identify patterns in the data, not only text patterns that would be used for natural language processing later on, but also to understand what text is being auto-generated, how exactly cross-references are done, and whether anything is being used as, or acts as, applicability.
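As a rough illustration of the pattern-identification step Naveh mentions, here is a small sketch that scans extracted legacy text for cross-reference and applicability-style phrases. The regular expressions and sample sentence are invented for the example, not taken from the legacy data.

```python
# Minimal sketch, with made-up patterns: scanning extracted legacy text for
# recurring constructs (cross-references, applicability-style phrases) so they
# can be mapped to references and applicability structures later.
import re

CROSS_REF = re.compile(r"\b(?:refer to|see)\s+(?:step|task|figure)\s+([\w.-]+)", re.I)
APPLIC    = re.compile(r"\b(?:applies to|for aircraft|effectivity)\s+([^.;]+)", re.I)

def find_patterns(text: str) -> dict:
    """Return candidate cross-references and applicability phrases in a text block."""
    return {
        "cross_refs": CROSS_REF.findall(text),
        "applicability": [m.strip() for m in APPLIC.findall(text)],
    }

sample = "Refer to step 7 before servicing. Applies to serial numbers 100-250."
print(find_patterns(sample))
# {'cross_refs': ['7'], 'applicability': ['serial numbers 100-250']}
```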


And again, it was very important to analyze the output of the IETM and not just the raw data extracted from the IETM. In one of the projects, the legacy IETM was very outdated, but in another project, it was a very sophisticated IETM. We had to see how we could transfer that functionality, beyond the text, to S1000D and not lose the functionality and the data. So, it was not a one-size-fits-all solution for all the projects that we did. And really, the case study is a combination of two or three projects that we did. It did require a lot of time developing the project-specific business rules and functionality matrix, but most of all, we had to make sure that all the stakeholders were on the same page and actually understood the same rule exactly the same way as the others. People who have dealt with S1000D know that you can read an S1000D rule and come up with 50 different interpretations of what that rule means. So, making sure that all the stakeholders were on the same page was very, very critical.


And that's where Chuck, our S1000D subject matter expert, had to play a big role, sometimes explaining technical rules to people who are not necessarily familiar with S1000D, and also how a rule will look when it goes to S1000D. So, there was a lot of understanding built over there. And if you try to shorten that time, it's going to come back and bite you later. Next slide, please. So, once we completed the analysis, we could move to conversion. We will cover it in more detail later, but I will just repeat that the main mission of these projects was to ensure that all the data was accurately migrated from the legacy format to S1000D, but also to maintain the functionality of the legacy data.


19:58

And really, we're focusing on the process data module, but a big chunk of the data was also converted to descriptive, procedural, IPD, and fault isolation. So, it did require seeing how the data functions in the IETM and deciding which data type it needs to be. Next slide, please. So, this is really the slide where I'm going to spend most of the time. It's really a summary before we get to Jeremy and what we actually did. The approach that we took on these projects was a step-by-step approach. The first step was to get the native IETM application installed locally. Luckily, one of the IETMs was a fairly simple installation, but that wasn't the case for the other IETM. It was an Apache Tomcat application running against an older version of SQL Server Express, which, right off the bat, refused to work in our Windows 10 environment.


So, in order to make the application run its Java AppletViewer, we were forced to install the application to run in Windows XP mode. And we also had to use Internet Explorer compatibility mode in Microsoft Edge to get the thing to run properly. That was the old IETM. After doing that, we were able to start looking at what each IETM application was doing in real time. Each application ran several SQL calls to display the procedural steps and to determine which graphics would be displayed. Step three: replicate functionality and capture data. Most of the data was sitting in an Oracle database, so we were able to take that and start mapping to S1000D, and that's going to be in the next slide. But replicating the functionality and capturing the data that is created on the fly was definitely not an easy task. We especially had an issue with images. The legacy CGM images were produced by very old software.


Another issue was that the CGM images were done in a non-standard way. It was not true CGM, and you could not open the legacy CGM in any current CGM viewer and still maintain all the layers that were embedded in the CGM. So, at the beginning, we had to use the legacy CGM viewer to see the CGM, where it did allow us to turn the layers on and off, but we could not do anything else with that CGM. The legacy CGM display was managed by a mix of database calls which would turn the layers within that CGM image on and off. In the IETM, the same image would be used on several pages, with layers being turned on and off based on which page or action was being displayed. So, by viewing the webpage source, we could determine that the layers were being turned on and off by an array of JavaScript items. But the layer names in the database did not correspond to what they were called in the IETM-generated XML files. So, we had to create – well, the technical team had to create – a web crawler to run on the IETM and validate the list of picture and layer IDs. And this was accomplished by writing a Selenium WebDriver test suite to script and control the IETM and database application.
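The Selenium test suite itself was project-specific, but a minimal sketch of the general approach, loading each IETM page and pulling the JavaScript layer array out of the page source so it can be compared with the database, might look like this. The URL, page IDs, and regex are assumptions for illustration.

```python
# Illustrative sketch only; the real test suite was project-specific. It shows
# the general Selenium pattern: load each IETM page, read the page source, and
# pull the JavaScript array of layer IDs so they can be checked against the
# database. The URL, page list, and regex below are hypothetical.
import re
from selenium import webdriver

LAYER_ARRAY = re.compile(r"layers\s*=\s*\[([^\]]*)\]")    # hypothetical JS pattern

driver = webdriver.Chrome()
captured = {}
for page_id in ["proc-001", "proc-002"]:                  # hypothetical page list
    driver.get(f"http://localhost:8080/ietm/{page_id}")   # hypothetical local IETM
    match = LAYER_ARRAY.search(driver.page_source)
    if match:
        # Record which layer IDs this page turns on, for comparison with the DB.
        captured[page_id] = [x.strip(" '\"") for x in match.group(1).split(",") if x.strip()]
driver.quit()
print(captured)
```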


24:00

So, what I just covered in the putting-it-all-together step is very technical, but we couldn't bring the entire technical team to the webinar, so they did manage to dumb it down for me to understand. Hopefully, I got that right. Just touching on the data that was generated: log files were being generated by the Apache Tomcat application of the IETM and presented as an external XML file. Now that we actually had a list of the content of the database and, more importantly, had the ability to start tracing the SQL calls that the application was invoking in real time, it allowed us to determine exactly which layers were turned on and off and match that to the text in the Oracle database. And, more importantly, it allowed us to see what the IETM was generating in actual real time. And as the application processed step-by-step content, we were able to save, parse, and compare our exported output for validation. And if we could go to the next slide. So, now, we are actually going to discuss how we took the Oracle database and maintained the relationships between the tables through to the S1000D mapping. So, Jeremy, can you please walk us through the process of getting the data out and maintaining the relationships between the Oracle tables?


Jeremy Love

Of course, Naveh. So, the first step involved a comprehensive analysis of the SQL database. I explored the schema, the tables, the relationships between the tables, the different constraints, and then the data types for the fields in the tables, really trying to get the whole picture of how all the data interrelates, because the interrelation between all the data, and the tables in particular, is really fundamental to understanding what's going on. So, after identifying which tables and fields had the relevant data that needed to be exported to S1000D, we were able to get working on the process. This required close collaboration with stakeholders to find which data elements were essential and then how they mapped to the S1000D schema.
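To make the schema exploration Jeremy describes concrete, here is a minimal sketch that pulls parent/child (foreign key) relationships from Oracle's standard data dictionary views using the python-oracledb driver. The connection details and schema owner are placeholders, not values from the project.

```python
# Minimal sketch of schema exploration: pull parent/child (foreign key)
# relationships from Oracle's data dictionary so the table interrelations are
# explicit before any export. Connection details and the owner are placeholders;
# ALL_CONSTRAINTS and ALL_CONS_COLUMNS are standard Oracle dictionary views.
import oracledb  # pip install oracledb

SQL = """
SELECT c.table_name        AS child_table,
       cc.column_name      AS child_column,
       p.table_name        AS parent_table
FROM   all_constraints  c
JOIN   all_constraints  p  ON p.constraint_name = c.r_constraint_name
                          AND p.owner = c.r_owner
JOIN   all_cons_columns cc ON cc.constraint_name = c.constraint_name
                          AND cc.owner = c.owner
WHERE  c.constraint_type = 'R' AND c.owner = :owner
ORDER  BY c.table_name
"""

with oracledb.connect(user="ietm_ro", password="***", dsn="localhost/XEPDB1") as conn:
    with conn.cursor() as cur:
        for child, column, parent in cur.execute(SQL, owner="IETM"):
            print(f"{child}.{column} -> {parent}")
```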


I designed a transformation process to extract the data from the SQL database and then convert it into S1000D XML. This involved writing SQL queries and some software development to process the results of the queries, also making sure to extract the information hierarchically, so the relationships between the data come through into the XML. There was also a transformation layer which took the exported data and generated, ultimately, valid and BREX-conformant S1000D. The final XML documents, if you could go to the next slide, were structured, organized, and prepared for delivery. These documents were compatible with S1000D-compliant systems for technical publications and met the BREX for all of the various stakeholders. Some of the challenges we had were data inconsistencies.
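A simplified sketch of that hierarchical export idea, with hypothetical query results written out as S1000D-style proceduralStep XML, is shown below. Real data modules need the full identification and status section and must be validated against the project BREX; this only illustrates how parent/child rows become nested elements.

```python
# Minimal sketch of the export/transform idea: hierarchical query results
# (task -> steps) written out as simplified S1000D-style proceduralStep XML.
# Real data modules need the full identAndStatusSection and BREX validation;
# the rows below stand in for what the SQL queries returned.
import xml.etree.ElementTree as ET

rows = [  # (task_id, step_seq, step_text) -- hypothetical query output
    ("TASK-100", 1, "Open the access panel."),
    ("TASK-100", 2, "Check that the pressure is 15 PSI."),
]

content = ET.Element("content")
proc = ET.SubElement(content, "procedure")
main = ET.SubElement(proc, "mainProcedure")
steps_by_task = {}
for task_id, seq, text in sorted(rows, key=lambda r: (r[0], r[1])):
    task_el = steps_by_task.setdefault(
        task_id, ET.SubElement(main, "proceduralStep", id=task_id))
    step = ET.SubElement(task_el, "proceduralStep", id=f"{task_id}-{seq:03d}")
    ET.SubElement(step, "para").text = text

ET.indent(content)  # Python 3.9+
print(ET.tostring(content, encoding="unicode"))
```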


28:00

Some database fields contained missing or poorly formatted data. We implemented data cleansing steps and worked with stakeholders to address these gaps. There were also some pretty complex hierarchies; mapping relational database structures to XML required careful handling of the relationships between parent and child nodes, and also iterative data processing techniques. And then, of course, there were the challenges of working with S1000D's complexity and understanding the intricacies of the standard. Am I leaving anything out, Naveh?


Naveh Greenberg

No, and I think that the big challenge here was actually maintaining the whole functionality of the IETM. And Jeremy, correct me if I'm wrong, but part of the task was to handle all the data and variables and build the proper process data modules to fit the functionality of the IETM. And it did require different technical aspects. So, once Jeremy was able to extract the fields from the Oracle database, understand the relationships, and build a map, which on the other side showed the relationships – this table calls that table, and the variable for the process data module is inserted over there; we'll talk about that step by step – it was Chuck's responsibility to start dealing with the basic S1000D work of assigning data module codes and information codes and all those kinds of goodies that you need to do when going to S1000D. And so, we can go to the next slide.
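As a small aside on the data module code assignment Naveh mentions, here is an illustrative sketch of assembling a DMC string from its parts. The values are invented; real codes come from the project's business rules and the SNS defined for the platform.

```python
# Illustrative only: assembling a data module code (DMC) string from its parts,
# the kind of assignment Chuck handled once the extracted content was mapped.
# The example values are invented; real codes come from the project's business
# rules and the SNS defined for the platform.
from dataclasses import dataclass

@dataclass
class Dmc:
    model_ident: str      # model identification code for the platform
    sys_diff: str         # system difference code
    system: str           # system code
    sub_system: str       # subsystem + sub-subsystem
    assy: str             # assembly code
    disassy: str          # disassembly code + variant
    info: str             # information code + variant
    item_location: str    # item location code

    def __str__(self) -> str:
        return "DMC-" + "-".join([self.model_ident, self.sys_diff, self.system,
                                  self.sub_system, self.assy, self.disassy,
                                  self.info, self.item_location])

print(Dmc("EXAMPLE", "A", "29", "10", "00", "00A", "040A", "A"))
# DMC-EXAMPLE-A-29-10-00-00A-040A-A
```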


So, really, in this webinar, up to now, we were discussing going from an Oracle database, which, on one hand, is more complex because you have to maintain the functionality; it's a lot more difficult to crack the logic of the IETM and to transfer that to S1000D. But that's the unique case of going from an IETM and Oracle database to S1000D, especially with the process data module. We also had to – in part of these projects, and in other projects as well – add functionality to data that is not very smart. In the example over here, we went with fault isolation because it's interactive enough, but not really a process data module. The project methodology is not exactly the same in how you extract the data, but it's not that much different.


You still need to analyze the legacy data. You need to develop conversion software and a production process, definitely develop business rules and a functionality matrix, develop a QA process, and all the things that you need to do for any kind of conversion to S1000D. So, even if your starting point is paper or PDF or Word, you can still develop a process to convert the legacy data into fault isolation, and even a process DM. But a process DM there will require a lot more subject matter expertise because the data is actually missing altogether. So, once you extract the data, you have to include subject matter experts and equipment specialists to understand exactly how the functionality is working.


32:03

So, here is just one example. And we actually may be showing some of our secrets, but you can see how taking an approach of normalizing the data takes a regular flowchart and puts it inside a standardized table. People who don't have any knowledge of S1000D can translate that flowchart into a table and make it consistent with a set of rules. Obviously, you define for them that when you see this thing, this is the name of the ID, the top is fault codes, but you always put it in the same location, and the yes or no are put in the last two columns. By standardizing it, you make it easy for conversion software to properly tag it, to a fault isolation in this case.
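A minimal sketch of how that standardized table can be machine-checked before tagging, assuming the rows have been exported to CSV with hypothetical column names (id, question, yes_goto, no_goto), is shown here. It flags the same kinds of gaps Naveh goes on to mention: a step with no question, or a yes/no with no branch.

```python
# Minimal sketch, using made-up column names: reading the normalized fault
# isolation table (question plus yes/no targets in the last two columns) and
# flagging rows that conversion software could not tag cleanly.
import csv, io

SAMPLE = """id,question,yes_goto,no_goto
FI-001,Is the pressure 15 PSI?,FI-002,FI-003
FI-002,,FI-004,
"""

def check_fault_table(rows) -> list[str]:
    problems = []
    for row in rows:  # expected columns: id, question, yes_goto, no_goto
        if not row["question"].strip():
            problems.append(f"{row['id']}: step has no question")
        if not row["yes_goto"].strip() or not row["no_goto"].strip():
            problems.append(f"{row['id']}: missing yes/no branch")
    return problems

# Flagged rows go back to an equipment specialist for correction in Word or CSV
# before the conversion software tags the table as a fault isolation DM.
for issue in check_fault_table(csv.DictReader(io.StringIO(SAMPLE))):
    print(issue)
# FI-002: step has no question
# FI-002: missing yes/no branch
```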


But more importantly, you can actually check the legacy data, which 100% was not written with S1000D in mind, and correct it. So, if a step has a question but no "no" step, or a step has no question, or there's a yes but there's no branch, you can correct that, in this case in Microsoft Word, and really enrich the data and bring it up to the new S1000D world. So, we're not limited by the legacy format. And in some sense, when your starting point is not smart data, you actually have the freedom to manipulate the data and make it ready to be converted to S1000D, in a format where a lot more resources out there can be used.


Chuck Davis

Real quick, one thing I just wanted to point out that I've heard over numerous conversion efforts: like you said, you're going through and you're finding instances where there may be missing content, but when you do a conversion of this magnitude on your content, any holes that you have in your data are going to come to light. They're going to come forward. And so, in instances that I've seen, it was important to look at that as a chance to correct that content instead of looking at it as derogatory, a ding on what you had. Like you said, the data is old and it was in no way, shape, or form written to S1000D. But this gives you a chance to go in, find those instances, and address them. So, anyway, that's all.


Naveh Greenberg

So, that's actually a good point because, as we said, the difference with having the legacy data as an IETM is that the format was more ready for a process data module. The image that you see over here, yes, it's not ready, but as you said, if we bring it into Microsoft Word, in this example, you can give that to an equipment specialist and they can fill in the blanks. They don't need to know S1000D. Most equipment specialists are experts on the system and not on S1000D. But just giving away a little bit more of our secrets, let's say, as you said, if you have a step with just a question –


36:00

– we had to actually develop made-up steps that were missing. So, if the question was, let's say, "Is the pressure 15 PSI?" we could have created a step that said "Check that the pressure is 15 PSI." We could generate that, an equipment specialist can write it, or you can develop a rule, but you can enrich the data before it gets to S1000D. So, if we can go to the next slide. So, what did we achieve by migrating the data to S1000D, especially using a process data module? I guess I will start with intelligent content and go counterclockwise. The process data module integrates intelligence into content, especially real-time intelligence. So, your tasks are more accurate, you have fewer mistakes by the maintainer, and it's a lot easier to audit the process.
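To make the enrichment rule Naveh just described concrete, here is a minimal sketch that turns a legacy yes/no question into a candidate "Check that ..." step for an equipment specialist to review. The regular expression handles only the simple "Is <subject> <value>?" form used in Naveh's example.

```python
# Minimal sketch of the enrichment rule Naveh describes: when a legacy fault
# question has no corresponding step, generate a candidate "Check that ..."
# step for an equipment specialist to review before conversion.
import re

def step_from_question(question: str) -> str:
    q = question.strip().rstrip("?")
    # Handle the common "Is <subject> <expected value>?" form, where the
    # expected value starts with a number (e.g. "Is the pressure 15 PSI?").
    m = re.match(r"(?i)^is\s+(.+?)\s+(\d.*)$", q)
    if m:
        return f"Check that {m.group(1)} is {m.group(2)}."
    # Fallback: keep the original wording so an SME can rewrite it.
    return f"Verify the following: {q}?"

print(step_from_question("Is the pressure 15 PSI?"))
# -> Check that the pressure is 15 PSI.
```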


It also builds content reuse. Here, we have true content reuse by storing the data in a common information repository and using it across multiple locations. And this is just one example of how content reuse can be improved. Task efficiency: by driving the content flow through user-driven input, the process data module helps a user efficiently complete tasks. And real-time feedback: it allows the IETM to accept real-time feedback from a monitoring system and initiate an external process as needed. Actually, I think one thing that is missing here is probably easier data sustainment, because once the data is in XML, and definitely and especially when it's in S1000D, sustainment of the data becomes a lot more effective. So, unless either of you has anything to add, I think we are ready to end with questions and answers, if there are any questions.


Marianne Calilhanna

Thank you. So, I do want to invite our attendees to ask any questions. You have three of our leading S1000D process data module experts here, so get some free consulting. So, one question that came up: Naveh, how do you overcome the challenges of legacy data that's not properly structured for S1000D to get to that process data module?


Naveh Greenberg

So, we actually touched on it a slide or two back. There are two ways. I think it's actually a lot easier to accomplish that when your data is not smart, because part of the task, the initial task once you get into conversion, is to extract the data. The example was fault isolation, but let's say you take even a simple maintenance task with a set of steps. It may say refer to step seven, and step seven is not there anymore; you can catch it up front. In a parts list catalog, if you're missing a part number, you find it before you get into S1000D.


39:56

And the goal in our conversion is to bring it into an intermediate format where we can hand the extracted data to a subject matter expert. In many cases, depending on the conversion, let's say an equipment specialist or an expert on the system, they don't need to know anything about S1000D. They just need to look at the data and say, "This specific cell is missing information." Well, the equipment specialist looks at it and says, "That's not a problem. It's this part number, it's this CAGE number," and so on. One thing that can be found up front is the flow of the data. So, let's say there are references to a task that doesn't exist anymore, that was not there at that time. During updates, when you do updates to data in Word or PDF, going back and fixing references from other locations is something we saw missed in many, many cases.


So, it may refer you to some task number so-and-so that doesn't exist anymore. So, by using keywords – let's say it says refer to the installation procedure for this so-and-so and it gives you the task number – by doing investigative work, we can list all the potential tasks that it needs to be linking to. But I think the fault isolation was a great example of how a lot of missing data can be inserted into a simple format such as a Microsoft Word table. As long as you normalize the data before you go into S1000D, you solve most of the issues.
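A small sketch of the kind of up-front reference check described here, scanning extracted text for "refer to task/step" phrases and reporting targets that are not in the extracted task list, might look like the following. The pattern and task IDs are invented for illustration.

```python
# Minimal sketch of the up-front reference check Naveh describes: scan the
# extracted text for "refer to task/step ..." phrases and report any target
# that does not exist in the extracted task list. Pattern and IDs are made up.
import re

REF = re.compile(r"\brefer to (?:task|step)\s+([\w.-]+)", re.I)

def dangling_refs(blocks: dict[str, str], known_ids: set[str]) -> list[tuple[str, str]]:
    """Return (source_id, missing_target) pairs for review by an SME."""
    return [(src, tgt) for src, text in blocks.items()
            for tgt in REF.findall(text) if tgt not in known_ids]

blocks = {"TASK-010": "Refer to task TASK-200 for the installation procedure."}
print(dangling_refs(blocks, known_ids={"TASK-010"}))
# [('TASK-010', 'TASK-200')]  -- flagged before the data goes into S1000D
```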


Marianne Calilhanna

Is the process data module widely used by people who've already gone through an S1000D conversion?


Naveh Greenberg

I'll actually let Chuck answer. But I want to distinguish this from a process data module that is authored. If the weapon system is brand new and you author the data from scratch, it's a night-and-day difference from when you need to go from a legacy format to implement S1000D.


Chuck Davis

So, as far as whether or not it's widely used, I guess, to me, that's a relative term. However, I do know it is used significantly across multiple platforms that I'm aware of. A lot of the platforms that I'm aware of that are using process data modules are definitely newer aircraft, things of that nature. And due to the nature of the content, as far as being secret or the clearance required to view that data, a lot of the process data modules are in their own sandbox kind of situation. And so, they're not as readily shown because of the content within them. However, with Naveh talking about fault isolation and some of the projects that we have, the conversion efforts we've done, instead of having just a fault isolation data set, where your maintainer or the end user is having to go in and it'll say "Is it this? No. Is it this? No. Is it this?"


44:00

And it goes through a list of possibilities that it may or may not be. It may say, "What is the fault code that you're getting?" And so, then the maintainer would just enter that data in and it would automatically drive you to where you needed to be. And so, I have seen a little more of that, some of the legacy data going into S1000D and utilizing the process DM. But I hope that answers your question.


Naveh Greenberg

And I just want to add: S1000D specifically, but the process data module in particular, depends a lot on the IETM or the solution or the system that is being used. It's one thing to tag the data. It's another thing to make it function. And there are systems out there that can handle the process data module. Some say they do. Some did. Legacy systems, I believe it's a handful that have been implemented. And as Chuck said, with a new system, because it's structured from nothing and the data is created from nothing, it's, in that sense, easier to implement the process data module because you start everything from scratch.


Marianne Calilhanna

How long does a project like this take? 


Naveh Greenberg

Well, I mean – so, again, every project is different because the volume of data varies from project to project. It's safe to assume that for a small manufacturer, or people without a huge amount of data, the process data module may be overkill. So, it's usually the big players that are using S1000D, people that have a very sophisticated system that they need to represent. So, you're talking about systems that probably have, and I'm throwing out a number, anywhere between as low as 50,000 pages – I'm translating it to a metric that people can understand, because a lot of it isn't pages, it's some interactive system – 50,000 to as much as a million pages. So, projects can vary. It's not something that will take months. It's something that could take years. Some projects were done in a year and a half, which is insanely tight, but that's not a lot of data. But you need to take into account easily three years to implement a system; migrating to S1000D in general, testing, implementing, and fielding – it's a multi-year project.

 

Marianne Calilhanna

What if my content is classified? Can DCL still support and help? 


Naveh Greenberg

Yes. But I think that we do need to put S1000D in a class by itself and the process data module in a different scenario, because they are different animals, really. Yes, the process data module is part of S1000D, but implementing it is insanity. Usually, when you have secret data, not everything is secret. It's a small percentage. It could be a few manuals. It could be some sections of them. And we can convert that, and the system and the conversion can be done at the client's facility.


47:59

We do have a client where we embedded our conversion software, incorporated with our conversion process, into an Amazon box that was placed in their facilities. And again, going back to the project methodology webinar, which is actually very good if people go back and review it: conversion, especially to S1000D, involves a lot of automation to get to S1000D, but it's not a fully automated process. That's a very critical thing to understand. So, even when we embedded the conversion software in an Amazon box and they're running it over there, there's still a conversion process, which deals with extraction of the data, normalizing the data, and QA-ing the data. It's an entire process. It's not just the conversion software. So, putting it in an Amazon box, that's a solution for secret data. But again, for most projects, if there is secret data, it's a small percentage, which can be solved either by doing the conversion in their facilities, going there for a little bit, or handling that in-house.


Marianne Calilhanna

All right. Well, we've come to the end of our questions. I don't know if there are any additional thoughts that either of you want to share. Quickly, I do want to share that I will send a link to everyone today to that project methodology webinar. I think that would be informative for those of you who've taken the time to join us today. Anything else to add, or have you said it all?


Naveh Greenberg

I mean, there's some echo. Yeah, there's some echo, so I won't say more. You can reach out to Jeremy and Chuck 24/7. They don't mind; all day and night, they'll answer your questions.


Chuck Davis

Just, anything that anyone has any questions about, definitely feel free to reach out. And we look forward to trying to help out anybody we can.


Marianne Calilhanna

All right. Our colleague Leigh Anne just pasted a link to a blog post that we thought everyone here might find interesting. It's called "Success with the Process Data Module." At the end of that blog post is a really good webinar we did a few years back with Chuck and Naveh. It was an Ask Me Anything around S1000D. We had a lot of really good questions submitted, so that might be a good refresher to share with your colleagues. And I just want to thank everyone for taking time out of their day to attend this webinar. The DCL Learning Series comprises webinars like this and other informative events. We also have a monthly newsletter and our blog that I mentioned. You can access many other webinars related to content structure, XML standards, artificial intelligence, and more from the on-demand webinars section of our website at dataconversionlaboratory.com. We hope to see you at future events. Thank you so much. This concludes today's program.


