
DCL Learning Series

S1000D – Ask the Experts

Marianne Calilhanna

Hello, everyone, and welcome to the DCL Learning Series. My name is Marianne Calilhanna. I'm the VP of marketing here at Data Conversion Laboratory, or DCL, as we're also known. And today's presentation is titled "S1000D: Ask the Experts," in which we invited you to submit any question you have related to the S1000D specification. And we have our two experts on staff to answer the questions. So we collected some questions during the registration process, and we also invite you to submit any additional questions that come to mind as we discuss things today. Before we begin, I do want to let everyone know that this event is being recorded and it will be available in the on-demand section of our website at dataconversionlaboratory.com.


So today I am really delighted to introduce my colleagues and today's speakers to you. We have Naveh Greenberg and Chuck Davis. Naveh is Director of US Defense Development for DCL, and he's a PMI-certified project management professional with more than 20 years' expertise in large-scale complex conversions that use a number of DTDs and standards. He specializes in conversions for DCL's defense and tech doc business units, and has been instrumental in developing DCL's DITA, Army 40051, S1000D, and 38784 conversion software suites. He also works with our clients to develop detailed project business rules. Naveh is a member of the United States S1000D Management and Implementation Land Working Group. He's been with DCL for more than 20 years, and he holds a BE in mechanical engineering from Stony Brook University. Welcome, Naveh.


Naveh Greenberg

Thank you.


Marianne Calilhanna

We also have Chuck Davis. Chuck is DCL's S1000D and IETM subject matter expert. Chuck also has more than 20 years of experience in aerospace tech data and tech data management for commercial and military clients. He wrote the S1000D aerospace business rules for the US Coast Guard and was on the integrated product team for the development of the S1000D business rules for the US Air Force. Chuck has developed both project rules and conversion strategies for multiple airframes converting to S1000D. He deeply understands the tactical approach to implementing S1000D and then of course the strategic benefits that it brings to any organization. So, welcome, both of you. And before we begin, I thought let's just get everyone on the same starting page. And let's give a brief overview on S1000D in terms of the business benefits. So Chuck, Naveh, over to you.


Chuck Davis

Okay. So, as far as the business benefits, of course, with S1000D, it is an international standard that allows you to put your data into highly structured content, allowing for data reuse, for viewing that content in a viewer, and for interactivity. Naveh, do you have anything you want to add on that aspect of it?


4:18

Naveh Greenberg

Yeah, I mean, it is, and I think maybe the first question will cover a lot of what we want to discuss, but it is a way to standardize your data – not necessarily in one industry; it's across many industries – and it allows you to lower sustainment costs and build repositories and filtering. But I think going to the first question would be –


Chuck Davis

Yeah. So, first question?


Naveh Greenberg

Yes.


Chuck Davis

So, the first question is: Why is S1000D the de facto standard for the aerospace industry? First of all, it is not the de facto standard for the aerospace industry. However, it is widely used throughout the aerospace industry and is definitely gaining traction. One of the reasons for that is that it is an international standard, and the committee for S1000D handles the specification aspects. That in itself keeps large corporations, and especially small corporations, from having to develop a standard and implement it on their own. That, of course, is a big cost savings. And the S1000D committee provides the schemas and things of that nature as well. So it's definitely a benefit going out across the industry.


Naveh Greenberg

And I wanted to add that, in addition to being an XML standard and a non-proprietary one that anyone can use, not only do the schemas force you to standardize your data, S1000D has the whole concept of business rules and a BREX that further restricts or standardizes the data. So, really, it forces everyone to map and handle data the same way, and that also makes data exchange a lot easier. And because we keep hearing "create once, use many," it is a great way to lower the cost of sustainment, translation, and distribution. The whole concept of applicability is a way to use the same data module but distribute it and use it for more than one customer. So it is a good way to standardize your data, because in the past you had standards with a schema or DTDs, but they were still open to different interpretations of how to use the data, how to tag the data. S1000D is a little bit more robust in that way. And at the same time, because you have the project-specific business rules, it also allows you to be specific about a particular weapon system in ways that do not apply to other systems.


8:15

Chuck Davis

And definitely, you mentioned "author once and use many times." With S1000D spreading and gaining popularity, you also have another benefit: in the past, you would have a company that would produce something one way, and then the customer didn't use quite that format. So having this implemented has definitely been a benefit. I've seen it already in several different instances where the customer is not having to figure out how to best utilize the data or convert it to their format, because you're already in a common environment.

 

Naveh Greenberg

And also, S1000D is used across industries, but it is a good way for suppliers to submit data with a somewhat similar look and feel to an OEM, or to a service that receives data from many, many suppliers. And once that OEM or organization develops their own business rules and all their ways to check the data, it is a way to implement standardization across all the suppliers. This will lead to other discussions later on in the webinar. But it is a very powerful way to attempt to standardize all the data that you receive.


Chuck Davis

Okay.


Naveh Greenberg

I think we're ready for the next question. What is the best way to convert my data to S1000D? So we usually hear three different answers when we interview a client: it's always either "I'll manually hand-tag the entire data set," or "I have a fully automated process to convert the data," or maybe "I have a combination and I'll just clean it later." And it's really not that simple and clear cut. First of all, finding the right balance between automation and manual process is really, really difficult. In many cases it's a case-by-case scenario, and it's not one-solution-fits-all. Especially going from legacy to S1000D, it's truly like trying to fit a square peg into a round hole. But there are ways to approach conversion to S1000D – and I'm really focusing on legacy conversion to S1000D, because if you author the data in S1000D it's a little bit different – and with any kind of project, some of the things I'm going to cover don't only tie to S1000D, but they play a bigger role when you deal with a standard that is very detail-oriented.


So you really need to plan your process. You even need to plan how you're going to plan the process, and that's not necessarily just developing the business rules and the project-specific business rules. It's really analyzing the legacy data and checking for anomalies: how tables are structured, do graphics have text layers on them? Chuck and I actually met on probably the first large-scale legacy conversion for the Air Force at that point.


12:11

Chuck Davis

Yes, it was.

 

Naveh Greenberg

With S1000D – let me emphasize that. And really, before there was even a business rule finalized for that organization. And over there – and Chuck can chime in, because he felt the pain more; we came more from the conversion-house side and he was more of a subject expert – we were analyzing how tables are structured, whether graphics have text layers, whether there are references to pages. What do you do when it says "refer to page two," and now page two is only a combination of data modules? And something where subject matter expertise plays a huge role is: how do we break down the data? What if it doesn't have a standard numbering system? What if it doesn't have data? And I think Chuck can talk a little bit about what kind of steps you can take when the data is not structured properly.


Chuck Davis

Well, when you have a legacy conversion, you definitely have some hurdles that you have to deal with, and, like you said, you have to have a plan. If you have to apply SNS numbering to your content, and definitely if you're going through and applying info codes and things of that nature, you have to make sure that it's done correctly; it's not a one-size-fits-all. You definitely have to go in and pay attention to your data. Not that you have to read all of it, but you need to know what your content is talking about. And so it's good to have someone in your corner that can help you get your data into a spot where you can convert it correctly, so you have a roadmap in place and can move forward, like you were discussing.
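To make the SNS and info-code discussion a bit more concrete, here is a minimal Python sketch of how a data module code (DMC) can be assembled from its component fields during conversion. The field breakdown follows S1000D's data module code structure, but the class, sample values, and rendering below are illustrative assumptions rather than any project's actual tooling.

```python
# A minimal sketch (not production code) of assembling an S1000D data module
# code (DMC) from its component fields during conversion. Field names follow
# the spec's terminology; the sample values are purely illustrative.

from dataclasses import dataclass

@dataclass
class DmCode:
    model_ident_code: str        # project/model identifier
    system_diff_code: str
    system_code: str             # SNS system
    sub_system_code: str
    sub_sub_system_code: str
    assy_code: str
    disassy_code: str
    disassy_code_variant: str
    info_code: str               # e.g. a descriptive vs. procedural info code
    info_code_variant: str
    item_location_code: str

    def as_dmc(self) -> str:
        """Render the code in the conventional DMC- string form."""
        return (
            f"DMC-{self.model_ident_code}-{self.system_diff_code}"
            f"-{self.system_code}-{self.sub_system_code}{self.sub_sub_system_code}"
            f"-{self.assy_code}-{self.disassy_code}{self.disassy_code_variant}"
            f"-{self.info_code}{self.info_code_variant}-{self.item_location_code}"
        )

# Example with illustrative values only:
code = DmCode("EXAMPLE", "AAA", "DA1", "0", "0", "00", "00", "AA", "040", "A", "A")
print(code.as_dmc())   # DMC-EXAMPLE-AAA-DA1-00-00-00AA-040A-A
```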


Naveh Greenberg

And just some things that maybe people don't think about initially – I mean, it's very simple to say "planning," but the more you plan up front, really the less you do later on, and –


Chuck Davis

Oh, yes.


Naveh Greenberg

One thing that we actually did is try to do a content reuse analysis on the legacy data. A lot of people think you do it later on when the data is tagged, but there are multiple reasons to do a content reuse analysis up front, besides the obvious reason of dealing with less data by finding the redundancy. It is a quick and easy way to give you clues as to where applicability can be used or where you can chunk the data. So you really do need to know the legacy data very well in order to have a good plan to go to S1000D. I think another critical point – which, again, is an obvious point, but for some reason people keep ignoring it – is really knowing and defining who all the stakeholders are, but most importantly, who's going to be using the data.
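As an illustration of that kind of up-front content reuse analysis, here is a minimal Python sketch that normalizes legacy paragraphs and hashes them so exact duplicates surface before any tagging starts. The file names and sample paragraphs are hypothetical.

```python
# A minimal sketch, under assumed inputs, of an up-front content-reuse scan:
# normalize each legacy paragraph and hash it so exact duplicates surface
# before any conversion work begins.

import hashlib
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivially different copies still match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def find_exact_duplicates(paragraphs_by_source: dict) -> dict:
    """Map a content hash to every (source, paragraph index) where it appears more than once."""
    seen = defaultdict(list)
    for source, paragraphs in paragraphs_by_source.items():
        for i, para in enumerate(paragraphs):
            digest = hashlib.sha1(normalize(para).encode("utf-8")).hexdigest()
            seen[digest].append((source, i))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

# Usage with made-up data: any hash appearing in more than one manual is a
# candidate for a single reusable data module (or for applicability).
corpus = {
    "manual_a.txt": ["Remove the access panel.", "Disconnect the battery."],
    "manual_b.txt": ["Disconnect  the battery.", "Install the new filter."],
}
for digest, locations in find_exact_duplicates(corpus).items():
    print(digest[:8], locations)
```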


16:03

Chuck Davis

Right. And I know that's part of some of the things we'll discuss in one of the questions, I think.


Naveh Greenberg

So again, defining the stakeholders is very critical – making sure that everybody's really on the same page and understands all the rules, because in S1000D you can have a rule and you can have 20 different interpretations depending on the people reading it. So you do need subject matter expertise at all kinds of levels. And I think before we move to the next question, there are two or three more items. Really, go slowly at the beginning: do a proof of concept, or do a conversion set, or maybe think about the approach of converting data sets first instead of just manuals. And when we talk about lessons learned, that's something we may want to discuss later. But that touches the last point I wanted to make, which is definitely lessons learned. Go over what anybody previously did. I know that a lot of people don't like to share lessons learned, and it's very difficult to find people that will be willing to share their findings, but learn from it if it was done before somewhere else, successfully or even unsuccessfully.


Chuck Davis

Oh, yeah. You don't have to reinvent the wheel.


Naveh Greenberg

Correct. Next. So, what software tools are necessary to convert IPB or IPB database data into S1000D data modules for publishing to an IETM? The way we read this question is really: what's the approach, or what tools are needed, to take any kind of illustrated parts catalog data and convert it to S1000D? And it's really a factor of what legacy format you're starting from. Is it an old IETM? Is it an Oracle database? Is it already in SGML or XML? Is it PDF, FrameMaker, Word? The initial approach will be a little bit different. We specifically developed our own scripts, and throughout the years we've perfected the routines from project to project, so we're not, as Chuck said, reinventing the wheel for every project. Some components are reused from project to project – for example, how tables are handled or extracted is done the same way – but with IPD data, you do need to know where the data is residing.


So if it's PDF, you need to know that column one is the index and column two is the part number, but it could be different from project to project. So the software, per se, in our case is software that we've perfected; I don't believe there's a magic out-of-the-box software, but that's the software that we use. For the images specifically, we actually do have off-the-shelf tools where we hotspot the graphics, but we have a process that reads the hotspots and places the tagging in the XML. And we also need to define – and that's where the planning takes a critical role – where the metadata resides in the legacy format. A lot of the time the legacy format is not necessarily paper or PDF; it could be a database that holds all the data. And Chuck can even talk about a few projects that we already did, where we had that challenge of taking data –
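Here is a minimal sketch of the per-project column mapping described above for parts data: once a row has been extracted from the legacy IPB table, a small map says which column holds the index number, part number, and so on. The column positions and field names are assumptions that would change from project to project.

```python
# A minimal sketch of mapping an extracted legacy IPB table row to named
# fields for an IPD data module. Column positions and field names here are
# illustrative assumptions; they change from project to project.

COLUMN_MAP = {          # legacy column index -> target field
    0: "figure_item",   # callout / index number
    1: "part_number",
    2: "nomenclature",
    3: "units_per_assy",
}

def map_ipb_row(row: list) -> dict:
    """Turn one extracted table row into named fields for the IPD data module."""
    return {field: row[col].strip() for col, field in COLUMN_MAP.items() if col < len(row)}

# Example with an illustrative row:
print(map_ipb_row(["1", "123-4567-890", "BOLT, HEX", "4"]))
# {'figure_item': '1', 'part_number': '123-4567-890', 'nomenclature': 'BOLT, HEX', 'units_per_assy': '4'}
```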


19:55

Chuck Davis

Well, when you have data that is in an old PDF or paper manuals, things of that nature – you were talking about metadata. One thing I've seen a good bit, anyway, is that customers will say, "Well, we want the data to give us these capabilities and this type of interactivity." And it's things that were not present at all, because in those paper manuals there is no real metadata per se. So needing to define functionality, and then the business rules, I think, is a big part of any conversion process, like you said before. You also need to be careful of situations where it seems like there's a super quick, easy conversion, because a lot of times, if it sounds too good to be true, it is. Not to discount some capabilities that exist, because there are some things that we have done, and others as well, that have made leaps and bounds in this area. But it's not just press-a-button. And one thing that Naveh mentioned was you've got to find the right balance between automation and manual intervention in these situations.


Naveh Greenberg

Just to give an example – I mentioned before the whole concept of fixing it later. When you have 50 pages and you spend five minutes a page, that's fine. But it's still very difficult to be consistent in adding hotspot tagging; you can only imagine how time-consuming it is, and how difficult it is to be consistent in the way that you do the tagging. Another thing I wanted to mention is that if the legacy data is actually PDF or Word, the benefit is that a lot more subject matter experts in the system know Word better than XML. So for them to manipulate or restructure the data – not necessarily create it, but even to have the client enhance the data before conversion to S1000D – it's a lot cheaper and easier to do it in a legacy format like Word than anything else.


So it's not necessarily the end of the world if your legacy data is PDF; we've done a lot of conversions from PDF or Word or whatever. So there is a balance – and that's why the initial stage is very different between a PDF and an SGML or an Oracle database and FrameMaker. The initial phase is different. The goal is to get to a point in the process where it is similar, and once you get to that point, you really don't reinvent the wheel every project. I think we are ready for the next question.


23:57

What are some lessons learned from data conversion? Yes. So we did touch on it a little bit in the previous questions. I think one of the most important things is definitely to plan ahead of time, plan your process, and document it. The statement that the more you do up front, the less you have to do later and the less it's going to cost, is definitely a true statement. And do involve subject matter experts like Chuck – and not only subject matter experts on S1000D or on the system itself; any subject matter expert on the legacy data is very, very critical. Involve all of them up front.


Chuck Davis

You have to have all the stakeholders involved, and the final users. You can develop something and have an idea in your head of the way you want it to go, but if it's not implemented in a way that the end user is going to utilize, or it's cumbersome, then they won't use it. And that's terrible, but that's the reality of it. So I think that's a big thing, and it's good to definitely have their input on how things will flow – how the IETM will flow and how your content will flow through the IETM. And then make sure that they know you're going to do everything you can, as an organization or for whatever you're working on, to make it as usable as possible, but that there are limitations as well. Whether it's the structure or the software or whatever, there are limitations to what can be done. So I think that's a big thing, to make sure everyone's on the same page.


Naveh Greenberg

And we discussed analyzing the legacy data for reuse, for anomalies, for what's missing and what's extra with S1000D, because it's really fitting a square peg into a round hole. But also – everybody that discusses S1000D knows, okay, business rules have to be developed, and all that kind of stuff. But I think what people are missing is that everybody needs to be involved, and you need to do a walkthrough of the rules to make sure everybody is on the same page.


Chuck Davis

Yes. Well, one thing in past projects I've worked on is with the business rules, like you said, making sure everyone's on the same page. In dealing with the Coast Guard, we would discuss a rule, and one group would say, "Well, that's not necessary. We don't need that," but the next group would say, "That's extremely important to what we're talking about." And so one of the big roles I played in that was trying to explain the nuances of how it would be implemented and what it actually meant to use something or not to use it – whether you would lose functionality or lose metadata or whatever – because a lot of times people just did not understand exactly what it was asking.


28:09

And so they would dismiss it, or they saw it as "Oh, it's just a function, we want as much as we can get" – which I understand. But sometimes it can become more cumbersome trying to implement something than it may be worth. So it's really just making sure you take the time. And like you said, have everyone involved that's going to be involved in the implementation of the IETM, to make sure everyone's on the same page.


Naveh Greenberg

And another item, which may sound like a contradiction to what we always push for, is the whole concept of automation. There is no push-of-a-button solution to S1000D. Now, when we emphasize automation, we really mean automation from one step to another step, and then from that step to the next. Just as an example, with a legacy format, the extraction of text to a Word format could be done with both an automated and a manual aspect, plus automated QA, but you need to focus on the specific step that you're dealing with right now. So if you're dealing with text extraction, yes, you can use automation, and you can develop automated QA tools, but you do need human intervention at some step, either before or after, maybe even during. You have to find that balance, which is so critical. You're not going to spend days and days of a developer's time, which might be more expensive, if somebody can just look at the data when it's only a few pages; but if you have a huge volume of data, you do need to focus more on automation. It is a very critical balance between automation and manual review. And again, don't reinvent the wheel. Some steps are similar from process to process; you may just need to customize them from point to point.
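As a small example of the kind of step-level automated QA described here, the sketch below flags figure references in extracted text that have no matching graphic in a supplied list. The reference pattern and the sample data are assumptions for illustration only.

```python
# A minimal sketch of a step-level automated QA check: after text extraction,
# flag every figure reference that has no corresponding graphic available.
# The reference pattern and the sample values are illustrative assumptions.

import re

FIGURE_REF = re.compile(r"\bfigure\s+(\d+)", re.IGNORECASE)

def check_figure_references(text: str, available_figures: set) -> list:
    """Return figure numbers referenced in the text but absent from the available set."""
    referenced = set(FIGURE_REF.findall(text))
    return sorted(referenced - available_figures)

# Usage with made-up content: anything returned goes back for manual review
# rather than silently flowing into the converted data modules.
sample = "Remove the cover (see figure 3). Torque the fitting per figure 12."
print(check_figure_references(sample, {"3", "7"}))   # -> ['12']
```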


Chuck Davis

Question five. So, when is configuration handled in the SDC (system difference code), the disassembly code variant, and the info code variant, and when is it handled via inline applicability – and where do you draw the line? So of course, this is talking about applicability. And honestly, when you're talking about your applicability and where you draw the line, you want to get as much functionality out of your IETM as possible, of course. So you can use your attributes in these tags to separate your content by tail number, by conditions, and by all the different types of applicability.


However, as far as where you draw the line, that's going to be a business case analysis and a case-by-case situation, really, depending upon how far down the rabbit hole you want to go. How deep do you want this to go? How structured? Some companies and organizations will say, "Yes, we want S1000D; however, we are very lenient on a lot of the rules and things of that nature." And then you have other organizations where it is extremely structured.
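To illustrate the inline applicability idea in the simplest possible terms, here is a Python sketch that checks a content item's assertions against one product's attribute values, which is roughly what an IETM viewer does when it filters by tail or configuration. The attribute names and values are made up for the example.

```python
# A minimal sketch of evaluating inline applicability: each piece of content
# carries assertions about product attributes (tail, config, etc.), and it is
# shown only when the reader's product matches. Names and values are illustrative.

def is_applicable(assertions: dict, product: dict) -> bool:
    """True if the product's attribute values satisfy every assertion."""
    return all(product.get(attr) in allowed for attr, allowed in assertions.items())

step = {
    "text": "Torque the fitting to the higher value.",
    "applic": {"config": {"B", "C"}, "engine": {"turbo"}},
}
product = {"tail": "168", "config": "B", "engine": "turbo"}

if is_applicable(step["applic"], product):
    print(step["text"])
```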


31:58

So with this type of question, honestly, I think it's really just going to be a case-by-case analysis. You need to do a business case on the system you're working with, and then decide how in-depth and how structured you require your content to be. Do you have anything to add?


Naveh Greenberg

No. I just think that this is a great example of how during the analysis phase and the planning phase – 


Chuck Davis

Oh, yeah.


Naveh Greenberg

...and wouldn't it be nice to have a way to find, first of all – if you had a way to do a content reuse analysis on the data up front, before you even do any conversion, you'd see how much you can reuse, how much is redundant, and where you have potential applicability. That's something that could help a lot with that decision. If you have a very small number of pages, or just a few manuals here and there, that's one thing; but if you have a lot of data and you sustain it, you need to translate the data and you need to – that's when applicability plays a bigger role.


Chuck Davis

I remember you mentioned a situation – and I don't know if it was applicability, it may have just been reuse, and I apologize if it wasn't applicability – but there was a situation you had told me about where they found that they wouldn't have to convert 30% of their content. Was it applicability or was it redundancy?


Naveh Greenberg

It was actually a combination. We had – we still have – a client that initially did the process manually of trimming the redundant data, but using a content reuse analysis, they found out that 30% of the data could be reused one-to-one. So it's safe to guess that safety summaries and front matter – yes, okay, everybody knows those are areas where you're probably going to have some reuse. You don't know how much reuse, and you don't know exactly where the reuse is, but you can potentially go there and do that. And there are automated ways to find it. But at the same time, finding those one-offs – even if it's not duplicated, if it's a close match, if you have one word different – if you can see that picture, that's a huge time saving, to know where applicability can be used. And that client can save a lot of money.
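The "close match" side of that analysis can be sketched with a simple similarity ratio, as below: exact duplicates are easy to hash, while paragraphs that differ by a word or two are the applicability candidates. The threshold and sample text are assumptions.

```python
# A minimal sketch of finding near-matches (not exact duplicates) in legacy
# paragraphs, to suggest applicability candidates. Threshold and data are
# illustrative assumptions.

from difflib import SequenceMatcher
from itertools import combinations

def near_matches(paragraphs: list, threshold: float = 0.9) -> list:
    """Return (index, index, ratio) for paragraph pairs at or above the threshold."""
    results = []
    for (i, a), (j, b) in combinations(enumerate(paragraphs), 2):
        ratio = SequenceMatcher(None, a, b).ratio()
        if ratio >= threshold:
            results.append((i, j, round(ratio, 3)))
    return results

paras = [
    "Disconnect the battery and remove the left access panel.",
    "Disconnect the battery and remove the right access panel.",
    "Install the new filter element.",
]
print(near_matches(paras))   # the first two paragraphs flag as a close pair
```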


Chuck Davis

Okay. Next question. Can hotspots be enabled in procedure data modules, provided the data files are from an IPD data module? Okay, that's kind of an in-depth question. Hotspots can work that way, yes. However, you could get into a really tricky situation with that. You can have, of course, multiple layers on a graphic. So if you had a graphic that was the same background picture as in an IPD, and you took that same background picture, removed the callouts from the IPD layer, and put a different layer on that graphic for, say, removal and replacement of something – it's the same picture, but it would have different step one, step two, step three callouts, like a procedural data module would – then theoretically, yes, you could do that.


36:00

And I do know of organizations that have used the same graphic, or the background layer of the graphic, and then instead of having numbered callouts would just have a lollipop callout. That way you didn't have to re-number a graphic. But even then, depending upon how deep your applicability concept goes, that could really determine how much of this type of thing you'd be able to accomplish in reusing your graphics from your IPDs for procedural content. Have you seen much of this in the past, or?

 

Naveh Greenberg

Not as much. I mean, the graphics – again, if the labels are the same, you can reuse it obviously anywhere. Layers of graphics, no. It's something that will be touched on in ongoing, existing projects that we have now, but not that much. But again, as you said – actually, initially when I read this question, I dumbed it down for people like me, because I'm not at the same level as you – I thought the question was: if you use a graphic in an IPD, does it have the same hotspots in a procedural data module? Then the answer is yes. But if you do have text layers with applicability on the graphic, the answer is also probably yes; it just complicates the process a little bit more. It makes it a little bit more difficult. Next.


So, is the logic engine something we could build in-house, or is this more specific to a specialized vendor resource? Again, if I understood the question properly, the question is: can anyone basically build an engine that will convert any kind of data to S1000D? It depends also on the formats, but it's not a simple process to create an automated conversion script to take data to S1000D – otherwise, we wouldn't have been around for so many years. But what's important to emphasize, with migration to any kind of XML standard, but especially migration to S1000D – and this is where people may mistakenly oversimplify the conversion – is that it's not a conversion script. It's a conversion process to go to S1000D. That's a very, very critical thing to understand. It never happens that the legacy data is always the same, so that you just take it, convert it, and goodbye. It's very highly customized, and that's where you analyze the data.


You have the business rules, you have two layers of business rules, you have the MRL to create. So you could bring up the concept of the 80/20 rule. Can you automate it to a level that – yes, you can, but every case is case-by-case. And throughout the years, we developed scripts that can be reused.


40:03

So even cross references, or tagging tables, or creating the data modules – the concept is the same. A table is a table, but a table could be broken down into a procedure if you have one column that holds the step and the next column is the action itself. Or take cross references: we're not reinventing the wheel of treating cross references, but you do need to customize how you recognize a cross reference. Does it say "see figure two in X manual," "see figure two in T.O. so-and-so," or just "figure 10"? There are a lot of keywords and patterns, which makes it very, very difficult to have one solution that even gets you to the 80/20 rule. So when we approach a project, we do need to customize it. Some of that is because of differences in the business rules, and some of it is because the legacy data is so different. I don't know, Chuck, if you want to add –
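A minimal sketch of that cross-reference recognition step might look like the following: a list of per-project patterns rather than one universal rule. The patterns shown are illustrative, not an exhaustive or project-specific set.

```python
# A minimal sketch of recognizing cross-reference phrasing in legacy text.
# The same "see figure X" intent shows up under many surface patterns, so the
# recognizer is a list of per-project patterns. These are illustrative only.

import re

XREF_PATTERNS = [
    re.compile(r"\bsee\s+figure\s+(?P<fig>\d+)\s+in\s+(?P<doc>[A-Z0-9 .\-]+)", re.IGNORECASE),
    re.compile(r"\brefer\s+to\s+T\.?O\.?\s+(?P<doc>[A-Z0-9\-]+)", re.IGNORECASE),
    re.compile(r"\b(?:see|refer to)\s+figure\s+(?P<fig>\d+)", re.IGNORECASE),
]

def find_xrefs(text: str) -> list:
    """Return every cross-reference match with whatever fields its pattern captured."""
    hits = []
    for pattern in XREF_PATTERNS:
        for match in pattern.finditer(text):
            hits.append({k: v for k, v in match.groupdict().items() if v})
    return hits

sample = "Refer to T.O. 1A-1-1 for limits. See figure 2 for the connector layout."
print(find_xrefs(sample))
```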

 

Chuck Davis

Really, I think it would be safe to say, like you said, that the 80/20 rule does not really apply to this, because your legacy content can vary so greatly, and so can the desired output that you're looking for. When it comes to a logic engine in a conversion process, I feel it's something where you need to be very careful: if you start going into this type of situation, know how you're going to go about it and have things set up, with rules in place. Like what you were saying, that's very critical. So that's really all I would have on that – just make sure you're very careful in how you approach the situation.


Naveh Greenberg

Which leads again to the point that that's why planning is so critical. 


Chuck Davis

Yeah. 


Naveh Greenberg

Finding the right tools is very, very critical, because software and automation are basically taking a rule that you develop after analyzing the data and automating it. So if you can define it, you can automate it. But you do need to define it, and you do need to understand that, 100%, it's going to be different from project to project. And let me assure you that it's probably also different from section to section in the same manual.


Chuck Davis

It'll be different within the same project. So like you said, it needs to be defined, but understand that even when you define it, you will find anomalies throughout your data where that definition does not fit what you were trying to make it fit.


Naveh Greenberg

And let me just give two quick examples, something simple and something complex. Something as simple as a table that spans pages: what happens when a table suddenly jumps from five columns to six columns? What if you have row merging in between cells? Maybe if you define it up front, the automation does a beautiful job, but what do you do with the table that spans multiple pages?


44:03

Or something a little bit more complex, like fault isolation. We all know what happens in the legacy data: you have an arrow that tells you yes or no, and the yes goes to a different page. You need to define those rules. And you may come to the conclusion, in the analysis phase up front, that the best approach here is to extract the data and put it into a format that we can automate – a table format or whatever. Then you can actually check the validity of the legacy data, because I guarantee there's going to be a missing "yes" link or "no" link, or it's going to link to a question that doesn't have a branch. The one thing for sure is that legacy data is not consistent. But you need to find where those inconsistencies are and decide what to do with them. Which, again, brings you back to the point of planning.
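Here is a minimal sketch of the kind of consistency check described here for extracted fault-isolation logic: once the yes/no branches are in an intermediate table, every branch should point to a question or action that actually exists. The intermediate format is an assumption for illustration.

```python
# A minimal sketch of validating extracted fault-isolation logic: flag steps
# with no outcome and branches that point at nothing. The intermediate data
# layout here is an assumed extraction format, not any project's real one.

from typing import Optional

def validate_fault_tree(steps: dict) -> list:
    """Report branches that point nowhere and steps that have no outcome at all."""
    problems = []
    for step_id, branches in steps.items():
        if not any(branches.values()):
            problems.append(f"{step_id}: no yes/no branch defined")
        for answer, target in branches.items():
            if target and target not in steps and not target.startswith("ACTION"):
                problems.append(f"{step_id}: '{answer}' points to missing step {target}")
    return problems

tree = {
    "Q1": {"yes": "Q2", "no": "ACTION-REPLACE-FUSE"},
    "Q2": {"yes": "Q9", "no": None},       # Q9 does not exist -> flagged
}
print(validate_fault_tree(tree))
```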


Chuck Davis

Next question. Is the process data module only triggered by a fault identified from feedback from a condition-based monitoring system? That's a great question, actually. The short answer is: no, it is not – or it doesn't have to be, anyway. That is one way that you could trigger a process data module. However, you can go into a process data module just on "I'm going to change a tire," something that simple. Are you changing the tire because it's time, or did you have a blowout?


There could be any number of reasons I'd change a tire – any number of reasons why you are performing a given maintenance action. So if it's a fault, then yes; but it could also be based on a time period, which wouldn't be condition-based monitoring. It can be triggered from fault isolation or a fault identified from feedback, but like I said, it can also be any number of actions that could drive going into a process data module.


In one example that we've dealt with recently, we were able to convert some content in order to provide the functionality that was desired. We were able to convert this data to a process DM, and it performed well, but it was very in-depth as far as how it flowed. And that's one thing to take into account when you're dealing with a process data module. Looking at it from a conversion aspect, the process DM is interesting and people like the concept of it, but if your legacy data is not structured to provide the flow that you would normally get out of a process DM, then you could wind up in a much more costly environment, because you may have to rewrite content in order to allow it to flow in the way that you would benefit from with a process DM.


48:19

So I guess, like I said, it can be triggered by a fault, but there are also many other ways that you can go into a process DM.


Naveh Greenberg

I don't have much to add, but as Chuck said, sometimes the legacy format is actually an existing IETM and they do want to mimic the way the text flows. Then yes, it was kind of a difficult task, but with a lot of planning we were able to automate the process to do it. Again, it depends on the situation; it really is case-by-case. But yes, we really saw it in a case where the text flows a little oddly and doesn't follow the usual structure, and because people are used to the legacy format but still want to go to S1000D, the process data module does play a role there besides fault isolation.

Is it difficult to provide the same content to different customers that have different BREX? So that's also a very good question, and it really depends on the BREX. How different are the BREX, number one? If there are rules that contradict each other, then you need a separate – I mean, because you do point to the BREX. Again, really, the short answer is that it depends on the BREX, how different the BREX are from one customer to another customer.


Chuck Davis

I think if you had one customer that had more structure, you could maybe go backwards a little. If one did not require as much but allowed more, you could take what was more structured and still deliver to the one that was more lenient; but going the opposite direction, I think you would have much more trouble. And you could also get into issues of different applicability and things of that nature. Like Naveh said, it's doable, but it could be an extremely in-depth process, and could wind up being more trouble than it's worth. I don't know; like you said, it would depend – it's really a case-by-case analysis on that.

 

Naveh Greenberg

And you do need to take into account sustaining the data. How do you work all that out? Again, it brings us back to the point of planning up front and knowing the variables before you make a decision. It might be to keep them separate, or even to use different data modules that are very, very similar, just because the BREX is different. But really, unless we know the specific case, it is case-by-case, and it could get very, very difficult. Or, if you separate it, it's probably not.


52:06

Chuck Davis

Well, like you said, to produce it one time might be doable, but to sustain that data in that same way could be extremely difficult. Okay. Will there be a CIR with the capability to reuse acronyms in the near future? So, really, I don't know that I've heard of using a CIR for that. However, there is the ability to tag your acronyms: you tag an acronym at the beginning, and then you just call back to it as you tag your content. With S1000D, you have the ability to reuse that same acronym over and over again, which is similar to a CIR in a sense. But really, just using the tagging structure and the schemas that are provided definitely allows for the use and reuse of acronyms throughout your content. And if you're able to run software on your content for data reuse, that's something it could easily pick up on, and then you know where to reuse those acronym tags throughout your data. Do you have anything extra?
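As a small illustration of harvesting acronyms for that tag-once, reuse-many handling, the sketch below looks for the common "Term (ABBR)" pattern in legacy text. The pattern is an assumption about how the source spells acronyms out on first use; real content would need more cases.

```python
# A minimal sketch of harvesting acronym definitions from legacy text so they
# can be tagged once and reused. The "Term (ABBR)" pattern is an assumption
# about how the source spells acronyms out on first use.

import re

ACRONYM_DEF = re.compile(r"\b(?P<term>(?:[A-Z][a-z]+ ){1,5})\((?P<abbr>[A-Z]{2,6})\)")

def harvest_acronyms(text: str) -> dict:
    """Map each abbreviation to the spelled-out term found before it."""
    found = {}
    for match in ACRONYM_DEF.finditer(text):
        found[match.group("abbr")] = match.group("term").strip()
    return found

sample = ("Deliverables include the Interactive Electronic Technical Manual (IETM) "
          "and the Illustrated Parts Data (IPD) modules.")
print(harvest_acronyms(sample))
# {'IETM': 'Interactive Electronic Technical Manual', 'IPD': 'Illustrated Parts Data'}
```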


Naveh Greenberg

No. Actually, thinking back to the previous question: what if the business rule in one case was to use common information repositories and in the other case was not to? That would complicate the use of the BREX. But no, that's really why Chuck is talking about this question and not me – he knows that information. I don't have anything to add on that. So Marianne, I don't know, unless some questions popped in, which, I don't –

 

Marianne Calilhanna

We do have some questions that came in, and I can see right now that we're not going to get through all of them, but let's see how many we can hit. So, one question is: How widely is the process data module used in the industry? This person is looking to deploy, but they're struggling to understand how that logic engine is derived.


Chuck Davis

I guess it would depend. I know it is used; now, how far and wide it's used, I'm not real sure. The issue that I've seen – and Naveh, please jump in – with the process data module is that the sustainment of that process data module is the biggest hurdle. You can have someone who can program or write your data module to provide you the content.


56:02

But if you try to sustain that data in-house without the knowledge of how to do that, it can be very in-depth and can definitely be a hurdle for those that are trying to sustain their data.


Naveh Greenberg

And to add, if you were to ask me that question a year ago, it would've been totally different.


Chuck Davis

Yes.


Naveh Greenberg

I would have said, "What? That's crazy." But again, it depends on your legacy data, number one. An example we spoke about was a project that we did where the data was already in an IETM format and we needed to mimic that flow, and the only way to do it was a process DM – that's a big usage. So there are examples. I think people shy away from it because it's very complicated, and a lot of the time they can accomplish the same thing with a different kind of tagging, fault isolation or whatever. But again, we keep coming back to it: it's case-by-case. Every time you talk about S1000D, you work case-by-case.


Chuck Davis

It would definitely be something to talk with them about, to find out how they were going to be implementing the process DM. And as Naveh said, if there are other tagging options available to still get functionality out of your data, that may be a better fit for that business model.


Marianne Calilhanna

Okay, so, speaking of the business model and business choices, this is going to be a really challenging one to answer, so I'm just going to have to cut you guys off at a minute. S1000D and DITA: we know that there are similarities, but what are some of the differences, and basically why would a company prefer S1000D over DITA?


Naveh Greenberg

For people that are familiar with DITA and S1000D, I always say S1000D is DITA on steroids. DITA really deals with about three kinds of topics – it's changing – but you have concept, task, and reference topics, and you can highly customize it. But if you need to understand and categorize your data in more detail – okay, understand that this is a fault, this is procedural, this is descriptive – S1000D is really the only way to go. Again, because of the short time, I would just say S1000D is truly DITA on steroids: a lot more options to filter your data, use applicability, use process data modules, a lot more content-driven. DITA, on the other hand, has a lot more out-of-the-box solutions. But again, it's really about what kind of data you have.


Marianne Calilhanna

When questions like that come up, I would invite you to use DCL as a resource. You can always email info@dclab.com, and I will personally facilitate that question to one of our experts – of course, we have Naveh and Chuck on staff, but we have a lot of other highly skilled technology experts that can help answer that.

 

Naveh Greenberg

We're really not biased toward one or the other, so we'll give you a true kind of –


Marianne Calilhanna

Right. So if we didn't get to your question, we will follow up personally with you, but it is time to bring this to a close. I want to thank everyone for attending this webinar. Naveh, what you said about there's not a script, it's a process – that was really potent advice and a great way of looking at it. So thank you all for spending a little bit of time with us today. I just want to remind everyone that the DCL Learning Series comprises webinars, a monthly newsletter, and a blog. You can access many other webinars related to content structure, to S1000D, to other XML standards, and more from the on-demand section of our website at dataconversionlaboratory.com. I do hope to see you at future webinars, and I hope everyone has a great day. This concludes today's broadcast.


