WaDaYa KnO!

Getting surprising results from strategic project management, business analysis, content management, development, and delivery

2011 Trends in Technical Communication

By Gina Gotsill, TechProse Marketing and Proposal Manager

“What’s different this year?”

Gwaltney Mountford, president of the East Bay chapter of the Society for Technical Communication, launched into the chapter’s annual Trends Panel with this question for panelists Meryl Natchez, Linda Urban, Yas Etessam, and Jeff Gardiner.

Since last year, the technical communications landscape has changed quite a bit, panelists said at the March 4 meeting in Danville, CA. Here’s an overview of the trends:

1. The good news is that the job market has improved. However, while there are more jobs, they can be confusing to navigate because, in many cases, technical communicators aren’t simply writers anymore. They have titles such as “Community Liaison” and “Content Curator.” In these roles, they provide content, but they also interact with users and decide which content is relevant to their audience.

2. Organizations are talking about the business value of editors. Yas Etessam cited an IBM study that shows that edited pages are 30% more engaging to readers than unedited pages. Read more about the study here: http://writingfordigital.com/2010/07/04/a-fourth-of-july-lesson-in-the-value-of-editors/

3. In the past, technical communicators volunteered to monitor wikis and other customer-facing online documentation. Today, more organizations are seeing the value of hiring technical communicators to own this critical documentation. That means no more cramming the task of monitoring the wiki into your already busy schedule.

4. Our brains are changing! Jeff Gardiner discussed how the human brain changed when we went from an oral tradition to a printed tradition. Now that we are moving away from reading printed content to reading chunked, online content, our brains are changing again. (An interesting book on this subject: The Brain That Changes Itself, by Norman Doidge.) We have shorter attention spans. Enter Twitter, where users create 140-character tweets to communicate about everything under the sun. Linda Urban suggested that Twitter is a good way to “see bubbles of discussion.” It is not a bridge to mastery, she said, but it is a means of gaining awareness about many different topics. Technical communicators who aren’t familiar with Twitter or its ubiquitous hashtags need to jump on the bandwagon now. Try following areas of interest, or tweeting yourself. Jump into a public wiki to read or edit.

5. Shorter attention spans and an interest in reading chunked material mean technical writers need to train themselves to write in minimalist forms. They also need to be able to organize and categorize data and use metadata to help users find what they need.

6. Historically, technical communicators have not been very good at selling their work as a corporate asset, Meryl Natchez said. But there are many ways to promote what you do. Natchez suggested technical communicators celebrate their achievements on the company intranet, promote links that inform co-workers, and network at conferences and within their organizations. “You can’t just sit in your cubicle,” Natchez said. “You have to let people know what you’re doing and communicate your value to the organization.”

Urban suggested that technical communicators actively look for ways to improve processes and the organization as a whole. When you suggest ideas, leaders see you as a problem solver, not just as a technical writer.

Etessam suggested technical communicators speak up and ask if they can participate in other areas that are relevant to their work. For example, don’t be afraid to ask if you can attend the design meeting. Exploring other areas of the business often provides insight that improves your practice and longevity in the organization.

Gardiner suggested technical communicators find out what their organization’s marketing people are talking about. What are the buzzwords? What are the pain points in the organization? Be aware of trends in the organization and make suggestions that are in line with them.

Time to Stop Seeing Documentation as a Necessary Evil

By Steven Laine, TechProse President

I’m interested in your thoughts as I ponder the future of publications and content within corporations: Does content need to be owned at a corporate, senior-manager, maybe even C-level (as in CEO, CFO, CIO), rather than at the technical publications manager level, to get the full strategic value out of the content?

Here is the background: I was at the Intelligent Content conference that Ann Rockley’s group put on in February. It was a great conference: very targeted, intense, and useful.

The concept of intelligent content is compelling, logical, and common sense: let’s find, capture, and reuse our own stuff so we are more consistent, efficient, and responsive across the entire organization, from tech pubs to training to sales and marketing and beyond.

Makes sense, right? Yet attendees at the conference kept coming back to this question: How do you persuade management to make the investment in DITA tools and methodology in order to reap the downstream benefits across the organization? How do I, as a technical writer or trainer, get management to free up dollars now so that we can have better content down the road?

We all know that documentation and training are usually looked at as (yes, I’ll say it) a necessary evil. Corporations provide documentation and training to the user community almost as an obligation, and they aren’t happy about it. That’s because (generalizing here) documentation costs a lot of money, is used infrequently by a small audience, and does not generate revenue or goodwill. If the discussion centers instead on intelligent content as a corporate asset, however, the “necessary evil” framing falls away. The conversation shifts, and content is no longer an obligatory offering. Instead, it is viewed as part of overall corporate strategy. In addition, corporations begin to view the base material in the repository as an asset capable of generating revenue and indirect income.

Anthony Allen, Director of Production for the American Society for Training and Development (ASTD), provided one example of this. During his talk, he described how ASTD’s salespeople can now create custom content packages for each buyer by using intelligent content technology to combine a chapter from this book, a chapter from that one, a white paper, and some blog entries.

Of course, the possibilities extend beyond revenue. Corporations now have the opportunity to create a differentiated brand that attracts users, because it is easier to derive value from their content than from that of a competitor who does not use intelligent content.

So, again, my question is this: Do we need to create, champion, or develop a new role, something approaching a Chief Content Officer (CCO), or a role a level or two below that in the corporate structure, so that documentation is no longer seen as a necessary evil? How would that affect the tech pubs model? Would it evolve in an interesting way? Please comment freely and in any forum you choose. TechProse is active on LinkedIn, Twitter and Facebook.

What comes around goes around

Aren’t there people you love to work with because they’re just wonderful at what they do? These people are not just your comrades in arms; they are your friends! I recently checked in with one of these folks, Linda, who is a superb substantive editor. Inspired by the chat we’d been having about how things change but are not really new, she launched into a discussion of her current assignment.

“Guess what! I’m back to working on a project for HotShotCompany!”

“Oh yeah. What are they up to?” An obligatory response from me, thinking there was nothing particularly special about that.

“Converting from Adobe FrameMaker to DITA,” Linda teased, knowing she would pique my interest.

“Hmmm, as I remember you worked with that company quite a while back?”

She paused for an instant to consider the wheels of time and said: “It’s been over eight years, and do you know what I was doing then?”

“Can’t even guess!”

The chuckle in her voice said it all: “Converting from Interleaf to FrameMaker!”

The grey-haired writers out there will be chuckling right along with Linda. For those of you who are lucky enough to have the sparkle of color in your hair, read on and you too will be chuckling.

The point is that the old is becoming new again, and ever-spinning cycles double back on themselves.

A tag by any other name …

Once upon a time, authors tagged content with formatting instructions as they wrote. Applications, such as Interleaf and WordPerfect, used simple tags that looked a lot like HTML to specify content formatting. The tags <b> and </b> would create bold text. Seem familiar?

When HTML first came out, we tagged content for formatting, again by hand, as we wrote. In each of these cases, a What You See Is What You Get (WYSIWYG), fully formatted view was not initially available. That was the way it was, and we all just accepted it without a second thought.

However, the natural evolution of software applications seems to be from painful complexity to ease of use. Eventually, the newest versions of existing applications and brand-new applications all promoted a WYSIWYG, fully formatted view in place of hand tagging. We all eagerly jumped on the bandwagon and promptly forgot, or tried to forget, all about hand tagging content. We were totally carried away with turning text blue or finding new fonts. It was a fun new world! There was color, shape, graphics, and all this stuff we could not even see before. All we had seen before was <b> and </b>.

Now, fast forward to today and drop standardized markup languages into the single-source well: Standard Generalized Markup Language (SGML), DocBook, and the Darwin Information Typing Architecture (DITA), to name just a few. These are the languages used in the single-source world to tag content.

In the old days, we tagged content to convey formatting instructions. Today, we tag content with XML markup languages to convey information about the content itself, in particular its structure. XML tags don’t say, “Make this text blue.” Rather, they say, “This content is a task, or a step in the task, or a command in the step.” Tagging in this way is an advanced feature of single sourcing and requires deeper expertise and special tools to support it.
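
To make that concrete, here is a minimal sketch of a structurally tagged topic using standard DITA task elements. The id, title, and step wording are invented for illustration.

<!-- A minimal DITA task topic: the tags describe structure, not formatting. -->
<!-- The id, title, and step wording are invented for illustration.          -->
<task id="add_user">
  <title>Add a user account</title>
  <taskbody>
    <steps>
      <step>
        <cmd>Open the Administration console.</cmd>
      </step>
      <step>
        <cmd>Click <uicontrol>Add User</uicontrol> and enter the account details.</cmd>
      </step>
    </steps>
  </taskbody>
</task>

Nothing here says how a step should look on the page; that decision is deferred to the publishing system.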

Once we have consistently tagged the content, a publishing system can be configured to do different things with it. For instance, the same topic with the same XML tagging can be published with fancy formatting to multiple output targets like PDF, online help of many flavors, HTML, and even Microsoft Word, if you want. You can even publish to your iPod, and you don’t have to change a thing in the topic or the tagging. All the formatting magic is done by the publishing system.
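
As a rough sketch of how that works in a DITA-based toolchain, a map simply sequences topics and says nothing about output format; the same map is then handed to whichever output transform you need. The title and file names below are invented for illustration.

<!-- A minimal DITA map: it sequences topics but says nothing about output format. -->
<!-- The title and file names are invented for illustration.                       -->
<map>
  <title>Administration Guide</title>
  <topicref href="add_user.dita"/>
  <topicref href="remove_user.dita"/>
</map>

Feed this one map to a PDF transform and you get a book; feed it to an HTML or help transform and you get online help, with no change to the topics or the tagging.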

Now that is this lazy author’s nirvana! I can write a topic, tag it once, reuse it in bunches of places, and publish it to whatever output is in the fertile imaginations of corporate entities. I focus on the content and let the publishing system consistently handle all the pesky formatting issues.

Go deeper

Drink from a well you didn’t dig

Single sourcing is not a new idea. It might have a fancy new name, there might be a lot of new tools around that include its basic features, and there certainly is a lot of buzz about it, but in theory and practice, it’s been done! This fact should give you a nice warm fuzzy, because you can be assured that you are not stepping out on the bleeding edge when you decide to take a peek at single sourcing.

In 1993, I stumbled into my first single-sourcing project. It was a context-sensitive online help system. The core features of the product I was documenting were reused in several modules. I quickly realized that meant I could indulge my inherent laziness: write a topic once and use it in many places. The last thing I wanted to do was copy that topic all over the place, remember where the heck I put it, and then manually update it in all those places. That is my idea of hell! Instead, I authored self-contained and generically written topics, stored them in a simple file system, and updated them there. I wrote batch files that sequenced topics, reusing what I could. Then I used the batch files to populate and run the help compiler that came with the Microsoft Windows Software Development Kit. Voilà! I had a single-sourced help file.

To save my reviewers time, I noted where reused topics occurred in the help TOC and asked them to review these topics only once. Reviewers thought that was just peachy! At the end of the project, while my colleagues were sweating bullets trying to update all their copied topics, I just ran my batch files and went home.

In addition to folks like me who were just trying to save themselves a lot of headaches, the people at Information Mapping (IM) have been teaching writers topic-based and chunked authoring techniques since the late 1960s. Unstructured FrameMaker has had basic single-source features since the early 1990s, such as sequencing a series of files by referencing them in a master file, referencing one file in another, specifying text to display only under specific conditions, and using variables to represent text strings. We can also thank the makers of online help tools, such as RoboHelp and Author-it, for promoting single-source features in their software. And even Microsoft Word has features that can be used to support single sourcing. Taken as a whole, the well of our contemporary single-source strategies and tools is quite deep and has been shaped by real-world writing projects. You already use some of these tools and have the expertise needed to use the basic features of single sourcing.
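
Those same ideas map directly onto today’s XML tools. As a rough sketch, here is what conditional text and a reusable phrase look like in DITA, the modern counterpart of the FrameMaker features above; the id, audience value, and file name are invented for illustration.

<!-- DITA counterparts of conditional text and variables: a profiling attribute -->
<!-- and a content reference. The id, audience value, and file name are         -->
<!-- invented for illustration.                                                 -->
<topic id="install_notes">
  <title>Installation notes</title>
  <body>
    <p audience="admin">Only administrators see this paragraph in the admin build.</p>
    <p>The product name lives in one shared topic and is pulled in by reference:
      <ph conref="common_phrases.dita#common_phrases/product_name"/>.</p>
  </body>
</topic>

A filter file (a ditaval, in DITA terms) then decides whether the admin-only paragraph appears in a given output, much as FrameMaker’s conditional text settings do.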

Catch the wave

Many of us have been working for years in our own little single-source worlds, using whatever tools we could get our hands on, just to do what seemed obvious to us. We went to conventions, shared our tips and tricks, and showed off our results. We got excited when one of us found something that worked and quickly jumped on it. Finally, the tool manufacturers got wise to us and incorporated more robust features into authoring and publishing software to facilitate single sourcing.

Single sourcing is no longer a rare-bird practice of a hardy few. It is an industry best practice that delivers a significant return on investment. Just imagine: you pay to author, review, translate, and publish reusable topics only once. The accuracy and consistency of those topics are maintained automatically. And your organization can share reusable topics across multiple departments and interest groups, increasing corporate knowledge transfer.

There has never been a better time to be a lazy author and give single sourcing a try. That’s exactly why we’ve started this blog. Here you can follow us through a single-source implementation in all its glory. We’ll address all the common myths and concerns about single sourcing, as well as share some of our secrets and, of course, our opinions. We encourage you to add your two cents and share your experiences.

So grab your boogie board and let’s get wet!!

Go deeper

Coming soon…

 
