Key Takeaways
- EDUCAUSE Review editor Nancy Hays reflects on the progressive changes in technology throughout higher education (and society) over the course of her career.
- Many of the new technologies finding a place in higher ed perform as promised on installation; some do not.
- The newest technologies, such as artificial intelligence applications, require thoughtful examination of the risks of deploying them, especially their potential to invade student privacy.
- As technology becomes more complex, so do the IT department's evaluation, implementation, testing, and modification of systems to meet institutional needs and goals.
Longevity in technical editing and writing — in economics, medicine, aerospace, software, computer hardware, information technology, instructional technology, and learning spaces — adds up to a career of 35-plus years as I come up on retirement at the end of 2017. The English language skills required to help academic technology writers make their work clearer and more engaging to readers haven't changed, although my effectiveness in applying them has improved with wide experience. What has changed, then, over this personally significant expanse of time? Technology. With a capital T. Bolded and underlined.
The changes in technology have affected nearly everything in my life, especially publishing, with the most frequent challenges involving learning how to use new software programs (QuarkXPress in the early days) to pull together and distribute publications online (now Sitecore — very different from the waxed galley proofs on pressboard I started with). On the personal side, I can't imagine living without a mobile phone, Wi-Fi, and an increasingly smaller personal computing device (although I still don't like to read on small screens like those on smartphones). And yet, during the first years of my career, none of these technologies were available. When working in aerospace, I edited in red pen on paper printouts from mainframe computers. At one point I "babysat" the company's supercomputer because of my security clearance, bringing my reams of paper and red pen with me. Today, the same company would no doubt rely on a server farm or data center to conduct many of the necessary computations. Or even PCs on each engineer's desk. (But not the cloud — too insecure.) I was fortunate to work in WordPerfect and then Word to edit articles after just a few years using red pen and paper. Yes, publishing advanced that fast.
A major benefit to any editor or writer is a joy in learning all kinds of things. I've certainly benefited from curiosity about scientific and technological advances. Thinking back to my school years, though, I find myself envious of the opportunities available to students today. I remember the tedious work of memorizing everything to take the tests, looking things up in hardbound library books to research the topics of papers, and writing those tests and initial papers by hand. Luckily, I had a typewriter later in college and in graduate school. With bottles of Whiteout and reels of correction tape to fix my errors. The exciting advent of personal computers meant faster writing, easier correction and revision, and the ability to research concepts via browser rather than trekking to the library only to find the vital resource already checked out. Unfortunately for me, these advantages came after I finished my graduate degree.
Even more impressive today are the advances in user interfaces, from highly interactive games to virtual reality environments where users can manipulate molecules to construct potential new drugs, for example, or explore long-destroyed archaeological sites such as ancient Rome. I have no doubt that education would have engaged my mind more deeply and stayed with me much longer had my instructors used those and other technologies to teach their courses.
Using a cool new gadget doesn't guarantee improved learning or increased graduation rates, of course; many more issues affect student success than the tools used in the classroom (face-to-face or online). Research into these gadgets, programs, and online techniques and how they affect learning is absolutely vital to determine which approaches to implement and which to shunt aside as ineffective. That type of exploration, written up by the explorers, has informed much of my working life for the past 30 years.
Ancient History
When I began working for the IEEE Computer Society in 1985, everyone used a PC for work. What a delight! We could exchange files; communicate with authors, editorial boards, and graphics specialists; and even nag each other about deadlines via e-mail from home, saving considerable time and frustration over paper-based processes. As long as electronic drafts, communications, and organizational systems stayed clear and easily accessible, the usual publishing headaches remained at a minimum. Then, with the advent of the World Wide Web we could suddenly publish our work online and include hyperlinks, interactive graphics, and eventually audio and video. Amazing. I also learned about cutting-edge tools and methods, from geographic information systems to visualization of data to making computer-generated hair look realistic. Evidently that last one was a hard problem indeed for the computer graphics community.
The volunteer editors-in-chief and their editorial boards came almost exclusively from academia. They handled the solicitation of articles and the peer review of submissions. They and the authors researched, developed, and studied the technologies relevant to their fields, and then shared what they learned through publications and teaching. In those early days, much of the teaching used lectures as the main format, although grad students got to play with some of the cool new toys as part of their PhD research. Data collecting, cleaning, mining, and analyzing took the most effort and generated the most arguments about approach, with little attention to who owned the data. Just like today, although that's beginning to change.
In 2000 I joined the staff of EDUCAUSE as editor of the peer-reviewed journal, EDUCAUSE Quarterly, or EQ. The publication had no editor-in-chief, relying on a volunteer editorial board for peer review. The association had relatively few employees, having been created by a recent (1998) merger of Educom and CAUSE. Brian Hawkins was president, and D. Teddy Diggs was editor and publisher of the new EDUCAUSE Review. (She still is, by the way.) We had quite a bit of freedom in what we published, and I had the opportunity to immerse myself in articles about enterprise hardware and software and pedagogical IT. Teddy handled the strategic level, and I handled the practical articles — case studies, good advice, and tutorials from all over the world. We had, and still have, a wonderful time meeting authors and volunteers by e-mail and at EDUCAUSE conferences, with authors and visitors coming from Europe, Japan, Australia, South America, Africa — around the world, really, although more from English-speaking nations. Clarifying the meaning of technical language with non-native speakers proved a real challenge at times, more than balanced by the thrill of publishing their work. And the authors were patient with multiple e-mails back and forth until we agreed on the final wording. Over time, I found it most efficient to ask for a proposal or outline first, so I could help guide the approach and coverage. That made the process for the authors faster and easier, from writing to editing to publication. Editors do prefer happy authors!
I've also found it helpful to work with groups of people in a Google doc, going back and forth with comments and revisions until everyone says "done." For example, when long-time EDUCAUSE peer reviewer and friend Shalin Hai-Jew asked me to write for the Kansas State C2C (Colleague to Colleague) online publication, I wrote about peer review and "regular" publishing in a Google doc before she took the final version ("Reach the Right Editor and Pass Peer Review") into the online publication in Scalar. EDUCAUSE colleague Gerry Bayne helped produce a summary video for the article as well. That type of collaboration is more fun for all of us, I think, although I've also had lengthy group phone, Skype, or Google+ conference calls with authors to talk through questions. Pardon the trite expression, but it really is the people in the higher ed IT community who make our jobs at EDUCAUSE so rewarding.
The higher ed IT community also has provided most of the articles published in EDUCAUSE Review, which benefits from not just the volunteers' active involvement but also their advice to colleagues to publish their work in the magazine. Some of that enthusiasm arose from the desire to "reach the right audience," while the rest came from the magazine's and the association's solid reputation in the field. I won't claim the magazine has been the major player in developing that reputation, as EDUCAUSE events — especially the annual conference — and working groups/committees have drawn many people into the vital exchange of advice and information that occurs in IT and in teaching and learning with technology. The advent of a research arm (ECAR) and then the Core Data Service gave EDUCAUSE the data and tools to help CIOs plan, budget, and even partner with peer institutions. Teaching and learning has played a major role in EDUCAUSE nearly from the beginning, with the ELI Annual Meeting proving popular and dynamic. That community shows a dedication to and pleasure in its work not often evident in other groups. I have delighted in attending the ELI meeting and the EDUCAUSE annual conferences, as both offer incredibly rich programs and the opportunity to meet fascinating people doing amazing work, including from other countries: SURF in the Netherlands and educational technology groups in Japan regularly send contingents to the annual conference.
What About Technology?
Already familiar with much of the technology used in commerce and education, I found it straightforward to begin learning about the technologies specific to higher education. The big problems at the time ranged from providing bandwidth and reliable networks to making sure classrooms had the audio/video tools the instructors needed. Schools needed heavy-duty number-crunching software, data warehouses, and access to supercomputers to handle research needs, for example, and they wanted much more for their teaching mission: on-demand tutoring, early-warning systems for potential student failure, classroom scheduling, advising reports pulling from different sources according to need, and integration of everything. That last one still challenges IT departments, but the rest have seen excellent progress. I find it amazing what faculty and administrators can do to help their students succeed, and how much they can discover about what's not working on their campuses so that they can fix it. Now, that is. It has taken years to transition from paper-based processes to digital ones that foster communication and collaboration. Among people in the same department, I mean, not necessarily across campus. Why? Because software programs from different producers rarely integrate with one another.
So what hard-to-integrate systems am I talking about? Student information systems, learning management systems, financial records systems, enterprise resource planning systems, fiber and Wi-Fi networks, physical and online security systems — there's a solution for nearly any task in higher education, but their producers have few incentives to make their programs work with anyone else's programs. Proprietary systems provide a lot of benefits to the companies that produce them, and only when pressure increased to let programs talk to each other did producers begin to add those capabilities. Capitalism at work.
Capitalism and Higher Ed Technology
Much of the world's population lives under some version of a capitalistic economic system, and I don't mean "a level playing field" type of capitalism. That doesn't exist. (Apologies, John Maynard Keynes.) Higher education has never been immune from the surrounding marketplace or the politics of the host country, and yet the ideal of furthering individuals' education, researching the world around us, and advancing public knowledge remains an enduring goal. It's natural enough that making a profit marks all of these endeavors in some way, although a few notable exceptions still exist.
Think of the Library of Alexandria. Utterly destroyed centuries ago, that worldwide exemplar of a library has taken multiple online forms: the Bibliotheca Alexandrina, the Internet Archive founded by Brewster Kahle, the University of California at Santa Barbara Alexandria Digital Library, and the Google project to digitize books, the Digital Alexandria Library. Like the physical publications, however, the digital publications require ongoing maintenance, especially to keep up with changing interface standards. I certainly don't have a floppy disc drive any longer, and I didn't keep my vinyl recordings of music, either (unfortunately, as they are now prized antiques). Just as many physical works disappeared over the centuries, so will many digital works disappear as curators decide where to devote their time and resources in converting digital works to new formats.
Another wonderful example should come to mind immediately when you think of "free online tutoring": the Khan Academy. Many for-profit tutorial services offer a variety of courses as well, one of the best-known being Lynda.com. And remember the many free massive open online courses, or MOOCs? Which brings me right back to capitalism in education: companies have sprung up to offer paid services, such as otherwise "free" MOOCs that charge students who complete the course to obtain a certificate of completion. Lynda.com was purchased by a much larger corporation, though it kept its name. Colleges and universities can't afford to operate without covering their costs, either, especially as public support has nosedived. Given adequate reserves, some can offer scholarships or free tuition for qualifying students, but most students pay increasingly hefty tuition to attend the higher education institution of their choice. And here comes the digital divide, driven partly by differences in parents' income and education.
Schools have attempted to address the digital divide by providing computing devices to all students, often supported by large corporation giveaways (think: Apple). Surveys have shown that most students have at least one device entering college — a smartphone. Many have multiple devices (a challenge for networks and bandwidth on campuses everywhere). That doesn't mean they know how to use common software like spreadsheets or word processing programs, even though both are vital in the business world. (If you want a lot more information about student use of technology, see the 2017 student study; for research into student and faculty use of technology, see the related ECAR hub online.)
To address the dearth of truly useful instructional technology at the college level and meet sometimes idiosyncratic pedagogical needs, many faculty and instructional designers have turned to developing their own tools or customizing publicly available tools. For example, large lectures could rely on clickers or other feedback devices to help the instructor survey the students for their understanding of the topic. Or, the instructor could ask students to use their smartphones or laptops to respond to quick polls using free survey software. Both approaches allow the instructor to project graphs showing the responses as they come in. I've seen articles and conference presentations on both, and the "free" approach seems to have overwhelmed the paid one. Imagine that. Google has accomplished much the same thing with its free docs and spreadsheets, yet users do pay a price, just not a monetary one. Remember, Google promotes itself as a search engine, not a provider of educational tools, even if many of us use it that way.
What does all this mean for the further development of educational technology? Newly successful small companies can find themselves quickly bought out by larger corporations, which have a vested interest in promoting their own products, not supporting skunkworks-type projects. The smaller company's innovative idea could end up buried, or perhaps subsumed into a big-name branded product that costs considerably more than the original developers had planned (assuming idealism over outright profit seeking, which might be far-fetched). Academic tool developers who customize software and sometimes hardware to better fit their needs (like adding specialized mapping equipment to drones) may find themselves facing copyright or patent issues or newly offered equipment that replicates their approach right off the shelf. Even if their customized tool works beautifully, they still have to deal with scalability and wide distribution of their product to make it truly useful to educators. Think of collaborations where colleges and universities share the expenses, such as the Internet2 NET+ project, or any open-source collaboration. The community develops and debugs the software, and for-profits spring up to offer services around using the software. A recent collaboration between Michael Peshkin of Northwestern University and Matt Anderson at San Diego State University resulted in an open-source product they call Lightboard. Users can buy or build their own [http://lightboard.info/].
Meanwhile, St. George's University in Grenada used a similar technology in a MOOC created for students in developing countries, as explained by Satesh Bidaisee in an EDUCAUSE Review article earlier this year. His lecture, during which he drew on a glass board, was videotaped and used in the MOOC. Note that although Bidaisee is "the content guy," the online MOOC team included a program director, an instructional designer, and a technical support person. The university also developed its own platform to serve up the MOOC. Not necessarily a scalable approach for worldwide distribution, and yet it serves the university and its target audience well.
Up and Coming Technologies
Have you learned about these new buzzwords yet? Blockchain. Artificial intelligence. Virtual reality. You've probably already heard of educational uses of audio, video, data visualization, and 3D printing, and I believe they have already proved their value in and out of the classroom. I want to briefly explore the newest members of the technology portfolio available to higher ed.
Blockchain
Originally developed to handle Bitcoin transactions, a blockchain is a software construct that puts a network of computers, rather than a single record keeper, in charge of a distributed and secure database of ownership. (IEEE Spectrum published a special report on blockchains [https://spectrum.ieee.org/static/special-report-blockchain-world] that I highly recommend.) Given that cryptocurrency exchanges have been hacked, a secure record looks like a great idea. Certainly banks, healthcare operations, and other businesses have jumped on the blockchain bandwagon without adopting Bitcoin. And although blockchain technology can be hacked, doing so is difficult and expensive for the hacker, which offers an opportunity for higher education and the lifelong student.
Consider an adult learner with the following educational background:
Dropping out of high school to work, our learner obtains a GED three years after the expected graduation date and joins the military as a computer security trainee. On leaving the military, our learner joins a small technology company offering continuing education funding and quickly completes certification programs in two coding languages, cybersecurity basics, and cloud technology. The technology company then agrees to support this employee's part-time enrollment in a four-year college with a major in computer science. Seven years later, our learner has earned a BS and a promotion to senior programmer in the security department. Three workshops in cybersecurity, networks, and privacy earn three badges verifying successful attendance and demonstration of skills, and our learner moves to a large banking company as a junior security expert. Certification from attending a MOOC on advanced cloud technology further rounds out the learner's professional skills. Then a multinational banking services company buys the bank and tells staff that they must reapply for their jobs because of an overlap in functions. How does this learner/employee "prove" the breadth and depth of skills gained when the new owners ask only for high school and college completion records?
While attending college this learner created and maintained an online portfolio of prior educational attainment, adding new diplomas and certificates as earned. When blockchain technology became available, our learner moved each portfolio entry to a blockchain entry, and then kept the blockchain current. The new owners' human resources department easily verified the accuracy of each blockchain item and recommended continued employment for our learner. On joining the security department, our learner introduced blockchain technology to the banking services company as a secure record-keeping method and set up training for the management team and financial specialists.
Does this seem like an overly optimistic success story? I see the blockchain as a secure way to maintain formal evidence of all educational achievements, including certificates and badges, and to streamline record keeping for both the learner and all the institutions granting those proofs of performance. Learners could still use e-portfolios to store their projects while benefitting from the more secure blockchain to store educational, financial, and legal documents for easy access throughout a lifetime — and even after, depending on access rights granted.
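For readers who want a concrete picture of what that record keeping might look like, here is a minimal sketch in Python of a hash-linked chain of credential records. It is purely illustrative: the record fields, issuer names, and function names are hypothetical, and it leaves out the distributed network of computers that makes a real blockchain trustworthy; it shows only why tampering with any single entry is easy to detect.

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_block(block):
    """Return the SHA-256 digest of a block's canonical JSON form."""
    encoded = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

def add_credential(chain, issuer, learner, credential):
    """Append a credential record linked to the previous block's hash."""
    block = {
        "index": len(chain),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "issuer": issuer,
        "learner": learner,
        "credential": credential,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = hash_block(block)  # hash computed before the hash field is added
    chain.append(block)
    return block

def verify_chain(chain):
    """Recompute every hash and link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        contents = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hash_block(contents):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical entries mirroring the learner's story above
chain = []
add_credential(chain, "Adult Education Center", "Learner A", "GED")
add_credential(chain, "Four-Year College", "Learner A", "BS, Computer Science")
add_credential(chain, "Workshop Provider", "Learner A", "Cybersecurity badge")

print(verify_chain(chain))        # True
chain[1]["credential"] = "PhD"    # altering any stored record...
print(verify_chain(chain))        # ...is immediately detectable: False
```

In a real deployment, many parties hold copies of the chain and must agree on new entries, so no single record keeper can quietly rewrite a credential; that distribution, not the hashing alone, is what makes the records so difficult and expensive to alter.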
Following a successful pilot during summer 2017, in mid-October MIT announced the debut of student-owned virtual credentials using blockchain. Although not the first, MIT is an early adopter. Central New Mexico Community College announced in mid-November 2017 that it would offer some students use of the blockchain for their diplomas and transcripts beginning in mid-December. I would expect other universities and community colleges to follow their lead.
Artificial Intelligence
This is the buzzword that either frightens or exhilarates people. Do you think of AI as a threat to jobs, well-being, or even human survival? If so, you join the larger part of the population looking ahead to increasingly wider use of AI. Or do you see AI as first a helper and then a partner in "achieving greater things for humanity"? That group is much smaller.
The prognostications based on fear and hope cover the next century or so, and I probably won't see an end result in my lifetime. Meanwhile, AI has arrived in its first implementations, some of which look limited in function and amazing in ability. The Go-playing program from DeepMind, called AlphaGo, beat world champion Go players handily, yet quickly lost to its subsequent iteration, AlphaGo Zero, which taught itself Go without studying any human games. Ask either program to do anything other than play Go, and it will fail — despite being best in the world at the one thing it is designed to do. I would label this level of AI as the possible job eliminator because such programs can use machine learning to become expert at a single, narrowly defined task.
Creativity is an exception, which might reassure some of us. For example, AI programs already write a number of news and sports stories for publication. They do not, however, identify interesting personalities (often not the main players) and interview them, or decide which paragraphs and photos will most appeal to readers. I can certainly see AI programs replacing proofreaders, copy editors, and "filler" writers fairly soon (in the next 10 years?) without having the combined knowledge to replace a publication's top editors and writers. Yet.
One of the scarier things now becoming evident is our inability to understand how an AI program reaches its decisions. A controversial example involves a peer-reviewed article by Michal Kosinski published in September 2017 about an AI program that identifies people's sexual orientation based on images of their faces. The public reaction was swift and hugely negative, although the results showed a high degree of accuracy. The potential misuse of such information roused considerable alarm and skepticism. As noted in a November 21, 2017, New York Times story by Cliff Kuang, we just don't know what our algorithms know or how they know it. That hasn't stopped governments from attempting to force transparency, though, as Kuang observed:
In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad and fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law "could require a complete overhaul of standard and widely used algorithmic techniques" — techniques already permeating our everyday lives.
[...] The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand.
Given that humans and machines "think" in very different ways, achieving this understanding may take considerably more research. Meanwhile, AI programs of all types continue to proliferate for multiple purposes in multiple fields. For example, as Hayley Tsukayama reported in the Washington Post (November 27, 2017), Facebook has begun using AI and pattern recognition to stop suicide broadcasts:
Facebook said that it will use pattern recognition to scan all posts and comments for certain phrases to identify whether someone needs help. Its reviewers may call first responders. It will also apply artificial intelligence to prioritize user reports of a potential suicide.
Given the problems that social media raises on an unprecedented scale, applying advanced technologies to help address those problems seems an appropriate response.
Michael King gave an overview of AI in higher ed in his August 2017 EDUCAUSE Review article "The AI Revolution on Campus," which I highly recommend for both background and advice. He concluded:
The coming decades will see a new wave of personalization enabled by big data and artificial intelligence. Higher education has the potential and the imperative to lead that transformation.
In education, AI programs can help with online tutoring (see the video on IBM's Watson as tutor), learning and assessment, and even basic counseling (courses needed for a chosen career path, financial resources available). They can save faculty and adviser time for those things software can't (yet) do, such as creating fluid immersive or experiential assignments or talking through career options as student interests change. The human touch and creative solutions to students' problems remain beyond the reach of today's machine learning. For now.
AI has already demonstrated its value in education (see the interview with Sebastian Thrun), healthcare (medical imaging, records management), business (IBM's Watson, for example; see also the interview with Jeff Bezos on Amazon's AI and machine learning strategy), and consumer products (self-driving cars, recommendation engines). Nonetheless, AI remains in its infancy, with major growth predicted. People involved in education can expect benefits and challenges from the options that will become available and should prepare to join the conversations about what will best serve learners and help meet institutional goals. (For a quick overview of the potential opportunities and concerns, see David Shulman's Leadership column from March 2016, "Personalized Learning: Toward a Grand Unifying Theory.")
Virtual Reality
Virtual reality has been defined as a way to "simulate a user's physical presence in a virtual or imaginary environment." From the beginning, virtual reality's promoters have hyped benefits the technology hasn't delivered and ignored its potential downsides. With a resurgence following decades of near somnolence in the commercial world, VR is also facing a backlash against the idea that it can create empathy through an immersive experience. As Inkoo Kang wrote in Slate (November 21, 2017),
The sensory immersion that VR facilitates isn't context, or a wearing down after repetition, or having an open mind about the subject at hand. ... [F]eeling something, even deep in your gut, isn't the same thing as understanding it. ... VR isn't brainwashing. No matter how advanced the tech gets, it can't make anyone care against their will.
Decades ago, I tested a VR headset at a computer graphics conference. It was heavy, bulky, and awkward, with all the wires running out of it. The graphics looked primitive and simplistic. And yes, I felt dizzy and nauseated when I took it off (women seem more susceptible to nausea when using VR headsets). I much preferred the CAVE, which debuted in 1992. The CAVE (Cave Automatic Virtual Environment) was invented by Carolina Cruz-Neira, Daniel Sandin, and Thomas DeFanti at the University of Illinois at Chicago. The immersive effect results from high-resolution images projected on the walls, ceiling, and floor. Users generally wear 3D glasses to see the 3D graphics. Those of us in the computer graphics community were pretty excited about the whole thing (I worked at the IEEE Computer Society at the time). Best of all, this technology, which arose in the higher education community, found a welcoming home in universities around the United States.
CAVEs have served multiple research purposes in higher education, with one obvious restriction: Users have to actually go to the site and walk into the CAVE to do anything. And building a CAVE and creating different environments for it take a lot of funding, much of which comes from grants and commercial entities interested in such things as designing new drugs. CAVEs are great for things like that.
What about individuals who would like to provide or experience immersive environments? That takes a lot of money, too. The efforts to reach consumers took the path of headsets and funding from entertainment companies, whether focused on artistic experiences or gaming. One of the most widely known is the Oculus Rift, which retails for $400. Much smaller and less expensive devices also provide an immersive experience, with the best known probably Google Cardboard, which works with a cell phone. All of them strap onto and around your head and leave you blind to your physical environment.
What does this mean for faculty who'd like to use VR in the classroom? Bryan Sinclair and Glenn Gunhouse of Georgia State University tackled this subject in the March 2016 article "The Promise of Virtual Reality in Higher Education." As they noted, software development continues to outstrip the hardware and memory storage needed, yet they see promise in VR for instruction. Maybe after it scales. In their examples, one person uses the headset while others watch a 2D video screen. As long as relatively uncomfortable and sometimes expensive headsets provide the gateway into the virtual reality worlds designers dream up, I don't see wide use of truly immersive VR in classrooms as a viable approach.
I do see some hope in efforts by special interest groups in EDUCAUSE and Internet2, whose Metaverse Working Group has the goal of "implementing standards-based, interoperable, interlinked virtual environments." Standards would be a solid first step, especially for higher education institutions that prefer not to purchase multiple different products that don't work together. "Interoperable" sounds wonderful!
The EDUCAUSE Library has a topic page on virtual and augmented reality for those interested.
Worry? Why Worry?
Indeed, why worry about these wonderful new technologies full of promise? They have multiple potential applications in higher education that could make administration and running the institution faster and more efficient, provide more data to planners to keep costs down and to faculty to improve student success, and support faculty in teaching using the most effective tools and pedagogy. I see lots to get excited about.
I also see lots to worry about. In obtaining data for planning and to support student success, for example, an institution risks invading student privacy in pursuit of its goals. The question "Who owns the data?" has not received satisfactory answers, much less consensus. And what about the relatively frequent breaches of data on campus systems? Highly detailed information about students, faculty, and staff could end up for sale on the dark web. Now that's a horror story for an administrator.
On the research side, where will you store the massive amounts of data collected? The cloud? I see some security issues there, too. Cloud storage also raises serious issues: varying laws (such as the requirement that data on a country's citizens be stored in that country), the necessity of providing continuous access to the data, access control standards, and security itself. For example, state actors go after intellectual property through nefarious means, attempting to gain a competitive advantage without doing the underlying research themselves. Hacking is faster, easier, and cheaper than redoing the research a university work group has already completed. Again, if a disagreement arises between the cloud storage provider and the higher education customer, who owns the data when the holder of the data is not the same as the provider of the data?
Social media has become the poster child for misuse of technology, whether for influencing elections in democracies or bullying perceived "enemies" under cover of anonymity. Self-training artificial intelligence plays a major role here, producing false postings, nonexistent users, and inflammatory language faster than defense mechanisms can keep up. Much as higher education administrators might prefer to remain separate from social media, higher education has embraced it in pursuit of new students, communication with students about campus events, and solicitation of donations from loyal alumni. Like the rest of us, campuses really can't do without some presence on the different social media platforms.
OK, so there's risk involved with these new technologies. Extra work for the IT department, too. As faculty and administrators investigate new systems and approaches to make their jobs "better," whatever that means to the individual or group, the IT staff needs to understand the programs under consideration, determine the risks and how to address them (if that's possible), and ensure the institution remains protected against data breaches, unwitting invasions of student privacy, and the possible damages from a catastrophic failure of the system, whether from a cyberattack or flooding from a hurricane. The more complex the system, the more for IT to investigate. And the most important question to answer is whether the new system does what it promises to do: save money, improve the user experience, produce statistically significant increases in retention and student success, or secure the campus network against failure. That's asking a lot of the IT department. To meet their mandate to obtain, support, and update technology across the institution, IT staff must stay up-to-date on new technologies and their pros and cons, and they must keep key administrators and faculty informed about the benefits and dangers of using these different technologies. Involvement in high-level campus planning becomes vital when the risks can produce severe consequences.
Having looked at the dark side here, I admit that I am also excited about some of the technologies finding homes in classrooms and across campuses. (I have always loved reading and learning new things, which influenced my choice of career and my joy in it.) Personalized learning is my personal favorite — with concern about data collection and inherent biases in the algorithms; see Audrey Watters' "The Weaponization of Education Data" — along with customized and attentive advising. Speedy networks, ample data storage, and software programs that help faculty, students, and administrators work faster and more efficiently appeal to me at a basic level. I just wish these features had been available when I attended college, especially skill-based tutoring and customized advising support. Of course, I might have become an economist instead of an editor had I gone through graduate education in the past five years. So I don't regret any of the decisions that brought me to a career as an editor in higher education IT, as the rewards of working with fascinating projects and wonderfully enthusiastic and dedicated people have made this the best possible job for me — an acknowledgment I'm delighted to give as I come up on retirement.
Many editors share similar characteristics: an eye for detail, a sense for what might make an author's story more compelling, the ability to meet frequent deadlines, a certain reserve or even shyness, little need for public recognition, and satisfaction in working with a team of authors, editors, proofreaders, print and web designers, and printers to produce the final publication. My initial shyness in meeting lots of strangers began to ease at the IEEE Computer Society, where I met so many volunteers and conference attendees, a process that continued at EDUCAUSE. Even public speaking has become easier, beginning with a surprise request that the members of the IEEE Computer Graphics and Applications Editorial Board (including me!) give a short extemporaneous talk at the Fraunhofer Institute for Computer Graphics Research in Darmstadt, Germany, during a tour of the new facility. Everyone else there was a computer graphics expert, from the board to the audience. I'm an editor — what could I possibly talk about? As it happened, technical advances in publishing gave me a solid basis for a short talk, especially as many of the technologies then coming out and predicted for the near future involved computer graphics and visualization, along with the concept of publishing entire magazines or books on the web. I'm happy to report that nobody fell asleep during my talk.
One of the most rewarding elements of my work at EDUCAUSE involves encouraging people to share their experience with their peers, whether through an article, a case study or tutorial, or a video or audio interview. This collaboration, sometimes with groups of authors, yielded some fascinating work, often featuring demos or multiple graphics illustrating the story, which the authors created during the development process. Best of all, it saved time and effort by guiding authors past common roadblocks to writing, such as deciding which concepts to include and the authorial tone to use. (EDUCAUSE and I frown on the "royal we" many academic authors assume for their personas.) I thoroughly enjoyed these collaborations, and they taught me to speak up in professional and personal situations to clarify expectations. Who knew an editor could learn public confidence from such an "invisible" task?
To all the authors and volunteers who have contributed, whether they contacted me or I encouraged them during conversations at conferences, thank you for making my work a joy. I certainly couldn't have done it without you.
Nancy Hays is editor and manager of Publishing at EDUCAUSE.
© 2017 Nancy Hays. The text of this article is licensed under Creative Commons BY-NC 4.0.