Ed Tech as Applied Research: A Framework in Seven Hypotheses

Key Takeaways

  • Seven hypotheses explore the feasibility of treating educational technology, typically seen as support for teaching and learning, as a form of applied research, offering an initial framework built on traditional research processes.
  • New knowledge results from research, usually about fundamental questions, but ed tech pursues applied research about practical problems using qualitative and quantitative methods and local standards.
  • Seeing ed tech as research emphasizes the collaborative nature of our work by helping shape our conversations about the knowledge we create, its standards, and its methods.

Edward R. O'Neill, Senior Instructional Designer, Yale University

Those of us who provide educational technology services may see ourselves in many different frameworks: as IT service providers, as resources to be allocated, as technology experts, as support staff — to name only a few. I might be troubleshooting, tinkering, keeping clients happy, performing one of my job duties, or solving problems, easy and hard.

We may also see ourselves as part of human services more broadly, supporting human agency and growth. However we think about this, we support the university’s mission — and by extension some of our culture’s loftiest goals. Teachers teach, learners learn, and educational technologists support their teaching and learning: the dissemination of knowledge to students. But what of knowledge's creation and dissemination beyond our institutions' walls? What relation does ed tech bear to research and publication?

Much has been said and written about ed tech as a service. The framework offered here sees ed tech as scholarship and draws practical consequences from this way of seeing. To explore the aptness of this framework, I offer the following seven hypotheses.

1. Ed tech supports and replicates the university's mission, using the methods characteristic of scholarship in general and research in particular.

We who work in educational technology support the university's mission to preserve, create, and disseminate knowledge. The university disseminates knowledge through a range of activities from publication to teaching, and ed tech's special role is to support the teaching part, although no hard line separates sharing knowledge with scholars, students, and the general public.

To support this mission, ed tech must pursue its own, smaller, practical version of the university's mission.

  • We must gather and keep, discover and share knowledge about educational technology so that we can recommend the right tool for each task and support these tools effectively and efficiently.
  • Like other forms of research, we must do this transparently, using standards that evolve through discussion and experience.
  • Even supporting the tools we recommend involves disseminating knowledge: helping faculty and students learn to use these tools (or others of their choosing) is itself a kind of teaching.

In short, ed tech is research and teaching of an applied and local sort.

Our knowledge in ed tech is practical: it aims to solve immediate problems. One key practical problem we face is, What is the best tool to achieve a specific goal? (Ed tech fits means to ends.) Answering this kind of question is perfectly feasible using methods native to higher education. When we hew to the methods and values of higher ed, we draw closer to those we serve: the faculty. We come to understand faculty better, as they do us, by sharing common values, methods, and habits of mind and of doing — we form a community.

In higher ed, new knowledge results from a process called research. But where scientists and scholars in higher ed usually pursue basic research about fundamental questions using precise methods and widely shared disciplinary standards, in educational technology, we pursue applied research about practical problems using a variety of general methods and standards that are on the one hand professional and on the other hand local and institutional.

Said differently, educational technology is applied research using mixed methods and local standards.

2. Ed tech work fits into the three phases of the research process.

Our ways of creating ed tech knowledge map well onto the methods used in higher education for research. Broadly construed, research involves three main phases: exploring, defining, and testing. The testing process is often construed as hypothesis testing. Whether they're scientists or humanists, scholars explore a problem, define its terms, and then develop testable hypotheses. When we in ed tech know the phase of research we have reached and what hypotheses we want to test, it's easier to track our progress, regularize our work, and target an explicit and shared set of standards — all things we must do to be effective, efficient, and transparent members of our communities.

An example is helpful. In sociology, Erving Goffman either created or inspired a new approach (variously called the dramaturgical approach, ethnomethodology, conversational analysis, or discourse analysis) by the way he explored, defined, and tested a specific phenomenon.1

  • Exploring. Goffman was curious about why people interact the way they do. Why do they talk this way or that, wear these clothes or those in specific contexts, such as a rural village or a psychiatric hospital?
  • Defining. Goffman saw these questions as problems of "social order": how do people organize their actions and interactions? Goffman extended this traditional area in sociology to the micro-level of small gestures and ways of speaking. He defined his research as answering the question "How is social order maintained?"
  • Testing and Hypothesis Testing. For the purposes of testing, Goffman collected observational data about face-to-face social interactions and behavioral reports from memoirs and newspapers. He also recorded conversations and analyzed the transcripts. Goffman developed specific explanations (his hypotheses), and then "tested" them against his collected observations, reports, and transcripts.

In ed tech, we also explore, define, and test. Our explorations revolve around practices, however.

  • Exploring. What tools are people using? For what purposes? With what results? Here and elsewhere?
  • Defining. What kinds of tools and functions are involved? What kinds of purposes? For example, is videoconferencing good for collaboration? For assessment? For rehearsal and feedback?

For these two phases of research we draw on the work of our colleagues at other institutions, as well as that of researchers in the areas of education, psychology, organizational behavior, and more.

When it comes to tools and learning processes, our categories need to be useful and shareable, parsimonious and rigorous without becoming abstract. We can't speak our own private language, nor a recondite professional jargon. We need few enough categories to avoid being overwhelmed, and we need clear lines to avoid becoming theological.

  • Testing. Finally, we need to test the tools and verify that they work well enough for the purposes at hand. This requires asking, does this tool do what people say? Reliably? Is it usable enough to hand off to instructors and students? Is it so complex that the support time will eat us alive? What evidence do we have, and how sure are we of that evidence?

Calling what we do "research" does not imply we need a controlled, double-blind study to confirm everything we know. We simply need to be sure that we are effective, and the degree of certainty required escalates with the investment of time, labor, and money.

From another angle, ed tech as research is akin to scientific teaching: both constantly test their own effectiveness, treating practice as a kind of hypothesis testing. Ed tech activity of this kind will also support scientific teaching better, because it not only "walks the walk" but also makes us better at practical hypothesis testing.

3. Ed tech can use familiar research methods as well as a broad understanding of learning.

Our methods are both qualitative and quantitative — in a word, mixed. We collect both numbers and descriptions. Our toolkit should encompass a range, including these basic tools:

  • Literature review: reading research and publications about others' practices, experiences, and results
  • Interviews: calling, e-mailing, and chatting with faculty and students
  • Statistics: counting what we do, hear, and see
  • Field experiments: testing of a tool by professors to see how it works for them
  • Equipment calibration and testing: using the tool ourselves to see if it works to our standards
  • Natural experiments: finding similar situations that use different approaches to create a semi-controlled experiment
  • Observational studies: watching teachers and learners at work to discover what happens with interesting tools "in the wild"
  • Ethnographies: interviewing users and capturing the data as audio, video, or notes in order to get a rich account of the experience of teaching or learning with a specific tool
  • Semi-controlled experiments: finding two similar classes or sections, one using a tool and the other not (or using a different tool)
  • Meta-analyses: comparing the results of disparate research studies or our own observations

Some research methods fit better in one phase of the research process than do others. Each phase also has its key activities: verbs that further specify the acts of exploring, defining, and testing.

  • EXPLORE. Activities: observe, collect, summarize, record. Research methods: lit review, interviews, field experiments, natural experiments, observational studies.
  • DEFINE. Activities: characterize, analyze, categorize. Research methods: lit review, debate, professional norms.
  • TEST. Activities: check, verify, measure, correlate, compare. Research methods: testing, observational studies, interviews, controlled experiments.
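
If it helps to track which methods belong to which phase, the mapping above is simple enough to write down as a small data structure. Here is a minimal sketch in Python; the variable and function names are illustrative, not a prescribed vocabulary.

```python
# Illustrative encoding of the phase/activities/methods mapping above.
RESEARCH_PHASES = {
    "explore": {
        "activities": ["observe", "collect", "summarize", "record"],
        "methods": ["lit review", "interviews", "field experiments",
                    "natural experiments", "observational studies"],
    },
    "define": {
        "activities": ["characterize", "analyze", "categorize"],
        "methods": ["lit review", "debate", "professional norms"],
    },
    "test": {
        "activities": ["check", "verify", "measure", "correlate", "compare"],
        "methods": ["testing", "observational studies", "interviews",
                    "controlled experiments"],
    },
}

def methods_for(phase: str) -> list[str]:
    """Return the research methods suited to a given phase."""
    return RESEARCH_PHASES[phase]["methods"]

print(methods_for("define"))  # ['lit review', 'debate', 'professional norms']
```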

Since our research is about how tools fit purposes, we need some notion of the purpose of ed tech: supporting learning. Without committing to one specific theory of learning, we can specify four elements that define, in a relatively theory-agnostic way, how we see learning: what it is, how it unfolds in time, and the big and small ways to enable it.

  • Basic definitions. What is learning itself? What are the main frameworks that have been used effectively to understand and support learning? E.g., goal orientation, motivation, working memory.
  • Process elements. What are the important moments in the instructional process? E.g., defining the learning objective, the student practicing or rehearsing and getting feedback, assessing whether the student has learned, etc.
  • Whole strategies. What instructional strategies have been found effective? E.g., authentic learning, problem-based learning, inquiry-based learning.
  • Valued supportive behaviors. What activities that are deeply valued in the context of higher education support learning? E.g., collaboration, writing, dialogue, and debate.

4. Knowing the phases of the research process lets us share where we are in that process for any given set of tools.

As we progress from exploring tools to defining their uses and testing them, we constantly gather and share our knowledge. Ideally, there is no single moment at which we suddenly need to know something precise without any background whatever. Instead, we will be most successful when we collect and share our knowledge gradually as we go, tracking where we are in the exploring, defining, testing process for each category of tools and purposes. If we do not know at any given moment what tools are used to support collaboration or assessment and how well they work for that purpose, then we will have a much harder time testing tools according to our own standards — which need to be explicit from the start, as well as constantly refined.

For each kind of tool and purpose, we should know which phases of the research process (exploration, definition, testing) we have completed, which we are in, and which remain. A robust system would map all our important dimensions against each other: phases, purposes, and tools. E.g.,

"These are the tools we're exploring, those are the tools we are testing, and here are the purposes we think they're good for."

"These are tools we have tested and that meet our standards, and here are the purposes we see them as fit for, along with our methods and evidence."

Not all tools are neatly focused toward specific ends. Some tools will likely be so basic that their purpose is merely utilitarian or back-end, such as file sharing, sending messages, or social interaction. Some megatools enfold many functions, such as the LMS, blogging platforms, and content management systems. These megatools support broadly valued activities in higher education. Ed tech knowledge about how tools fit purposes thus has a definite shape.

Questions and the categories of ed tech knowledge that answer them:

  • What is it? Tools
  • What does it do? Functions: live synchronous communication, multimedia production, etc.
  • Why would I need it? Purposes: specific teaching and learning, and more general activities of organizing and communicating
  • Does it work? Research: testing and the evidence we find and collect ourselves
  • How do I get it? Availability
  • Who supports it? How do I get started? How do I get more help? Support

The structure of our knowledge makes that knowledge amenable to gathering and sharing by various methods. But a good tool for managing this information would go a long way, and being experts in fitting tools to purposes, we will likely conclude that the tool that lets us track and share our knowledge is a robust but elegant content management system (CMS). Such a system would support a narrow range of utterances:

"This is a tool we are exploring/testing/have tested for this or that learning purpose." Tool, phase, and purpose.

"This is information we have found about a specific tool from a specific research method: interview, lit review, observation, etc." Data, research method, tool, and purpose.

"Here is a reference to a specific tool we plan to explore, along with possible purposes."

Such a system would work as a kind of dashboard. It would also allow us to track our work internally and simultaneously publish whatever elements we are ready to share. Moving knowledge from system to system is inefficient, however, and when support comes first, sharing our knowledge will always take a back seat. Therefore the knowledge management system and the knowledge sharing system would ideally be one and the same.

Indeed, if efficiency is a paramount concern (as it should be), and the relevant resources are far-flung, then it would be easier to point to them than to gather them together — a bibliography, not an encyclopedia. Given that tools like delicious.com, Tumblr, Zotero, Instapaper, and Evernote, to name a few, point to information elsewhere or gather disparate resources in a single location, it's also possible that the CMS is overkill for ed tech knowledge tracking.
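
Whatever tool we settle on, a CMS or a lightweight pointer collection, the records themselves stay simple. Here is a minimal sketch in Python of the kind of record the "utterances" above suggest; the class, field, and helper names are hypothetical and meant only to make the shape of the knowledge concrete.

```python
from dataclasses import dataclass

@dataclass
class ToolRecord:
    """One hypothetical knowledge-tracking record: tool, phase, purpose, evidence."""
    tool: str             # e.g., "Videoconferencing tool X"
    phase: str            # "exploring", "testing", or "tested"
    purposes: list[str]   # learning purposes the tool might serve
    method: str = ""      # research method behind the evidence, if any
    evidence: str = ""    # interview notes, lit review citation, test results, etc.

def ready_to_share(records: list[ToolRecord]) -> list[ToolRecord]:
    """Records with evidence attached are the ones we are ready to publish."""
    return [r for r in records if r.evidence]

records = [
    ToolRecord(tool="Videoconferencing tool X", phase="testing",
               purposes=["rehearsal and feedback"],
               method="field experiment",
               evidence="Two instructors used it for student presentations."),
    ToolRecord(tool="Annotation tool Y", phase="exploring",
               purposes=["collaboration"]),
]
print(len(ready_to_share(records)))  # 1
```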

5. Ed tech verifies hypotheses of a few definite types.

In ed tech, our hypotheses have typical forms. For example:

  • Tool T
    • has functions F(1…n)
    • works well enough to be recommended and supported according to standards S(1…n), and
    • supports purposes P(1…n).

Our work on the first level is verification: we assure ourselves that Tool T really has the functions claimed. Hypotheses start out basic, and the evaluative criteria we apply grow more complex as testing proceeds. There is no point in testing the functions of a tool that does not run on the platforms we need it to run on: first establish which operating systems are supported, then identify the required functions to test on those platforms.

Our work on the next two levels is (1) discovery and (2) verification and evaluation. We find out how well the tool works, verify existing claims, and evaluate the facts we gather based on our local framework of needs and standards.
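
One way to make these levels concrete is to write the hypothesis down as a small structure and check it level by level. The sketch below, in Python, is only illustrative: the class and function names are hypothetical, and the checks stand in for whatever our local standards actually require.

```python
from dataclasses import dataclass

@dataclass
class ToolHypothesis:
    """Tool T has functions F(1...n), meets standards S(1...n), supports purposes P(1...n)."""
    tool: str
    claimed_functions: list[str]
    standards_met: dict[str, bool]   # local standard -> met? (filled in during testing)
    purposes: list[str]
    supported_platforms: list[str]

def verify(h: ToolHypothesis, required_platforms: list[str],
           observed_functions: list[str]) -> bool:
    """Check the hypothesis level by level: platforms, then functions, then standards."""
    # No point testing functions on platforms the tool does not run on.
    if not all(p in h.supported_platforms for p in required_platforms):
        return False
    # Verify that the claimed functions were actually observed in our own testing.
    if not all(f in observed_functions for f in h.claimed_functions):
        return False
    # Evaluate the results against our local standards.
    return all(h.standards_met.values())
```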

6. Well-formed ed tech recommendations are sourced, supported, and qualified.

Recommendations will be more compelling when they carry with them the sources of our authority: our research methods and evidence. Recommendations cannot be unqualified, however; there are always caveats.

Here it seems wise to label the tools we recommend in terms of the level of support faculty and students can expect. Three levels seem crucial.

  • This is a tool or service our institution bought or made. We support it, and the vendor has promised a certain level of support. Changes can be made as needed based on institutional priorities.
  • This is a tool or service our institution pays for. The vendor supports it, but we do not. Changes can only be made by the vendor on their own timeframe.
  • This is a tool or service our institution does not buy or pay for. There is no promise of support. Use should be considered at your own risk, except that we evaluate all recommended tools for certain basic needs, such as security and data portability.
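
If recommendations are published from a shared system, these three labels could travel with each tool as a simple enumeration. A minimal sketch, with hypothetical names:

```python
from enum import Enum

class SupportLevel(Enum):
    """Hypothetical labels for the three support levels described above."""
    INSTITUTIONAL = "Bought or built by the institution; we and the vendor support it."
    VENDOR_ONLY = "Paid for by the institution; only the vendor supports it."
    UNSUPPORTED = "Not purchased or paid for; use at your own risk after basic vetting."
```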

7. Piloting is a kind of testing and evaluation based on strategic criteria.

A pilot is one kind of testing. But piloting typically only happens when a decision is to be made about the relative value of buying or supporting a specific tool for a specific purpose. When we decide to recommend and support a tool, even given all the proper caveats, such a recommendation comes with a cost: even if no money is spent, time and therefore money are used up. When such expenditures, whether in labor or dollars, rise above a certain threshold, a special kind of evaluation is needed. The criteria of such an evaluation are largely strategic:

  • What is the potential impact in depth and in breadth?
  • How innovative is the tool or activity?
  • Does it fit with our strategic goals?
  • Does it meet our standards of data integrity, security, portability, etc.?
  • How easy or difficult is it to support? How labor-intensive?
  • Does it fit with our infrastructure?

The hypothesis involved has a characteristic form: "Tool T meets our standards for recommendation and support." The form is trivial; the devil is in the standards.

One possible set of standards follows; bigger and smaller schools will have different values when counting impact.

Strong
  • Depth of impact: impacts over 1,000 students in a single academic year.
  • Breadth of impact: benefits an entire school or program.
  • Level of innovation: represents a quantum leap for us and puts us at the top of our peer institutions.
  • Alignment with goals: aligns with at least two strategic goals at three different levels (the profession, the university, our unit, the relevant schools and departments).
  • Attention likely: likely to attract positive attention.

Moderate
  • Depth of impact: impacts over 100 and under 1,000 students in a single academic year.
  • Breadth of impact: benefits several professors or a department.
  • Level of innovation: represents an incremental advance or brings us up to the level of our peers.
  • Alignment with goals: aligns with at least two strategic goals at two different levels (the profession, the university, our unit, the relevant schools and departments).
  • Attention likely: likely to attract mixed attention.

Weak
  • Depth of impact: impacts under 100 students in a single academic year.
  • Breadth of impact: benefits one professor.
  • Level of innovation: represents the status quo for our institution.
  • Alignment with goals: does not clearly align with any of the relevant strategic goals (the profession, the university, our unit, the relevant schools and departments).
  • Attention likely: likely to attract negative attention.

To what extent breadth of impact, say, outweighs potential negative attention is something to decide in practice. Having a clear set of standards absolves no one from making judgment calls. After enough evaluations have been completed using an instrument like this one, it should be possible to set the borderlines more clearly and even to weight the factors so that, for instance, insufficient documentation is a deal-breaker and possible negative attention is merely a nuisance — or vice versa.
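
Once borderlines and weights have been chosen, the evaluation itself is simple arithmetic. The sketch below assumes a 3/2/1 scoring of strong/moderate/weak ratings and uses placeholder weights; both are assumptions to be replaced by whatever a given unit decides in practice.

```python
# Hypothetical weighted score for a pilot evaluation. Ratings map to points
# (strong=3, moderate=2, weak=1); the weights are placeholders. A deal-breaker
# could be modeled as a hard check before scoring rather than as a weight.
SCORES = {"strong": 3, "moderate": 2, "weak": 1}

WEIGHTS = {
    "depth_of_impact": 2.0,
    "breadth_of_impact": 2.0,
    "level_of_innovation": 1.0,
    "alignment_with_goals": 1.5,
    "attention_likely": 0.5,
}

def pilot_score(ratings: dict[str, str]) -> float:
    """Weighted sum of rubric ratings, e.g. {'depth_of_impact': 'moderate', ...}."""
    return sum(WEIGHTS[dim] * SCORES[rating] for dim, rating in ratings.items())

example = {"depth_of_impact": "moderate", "breadth_of_impact": "strong",
           "level_of_innovation": "weak", "alignment_with_goals": "moderate",
           "attention_likely": "moderate"}
print(pilot_score(example))  # 2*2 + 2*3 + 1*1 + 1.5*2 + 0.5*2 = 15.0
```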

Concluding Observations

The verification of these hypotheses can only come through practice. To the extent that any part of them cannot be verified, that part needs to be thrown away and the hypothesis adjusted accordingly. The preceding is not just a framework and hypotheses: as research, it's a call to a community to share and discuss results, evidence, and methods. Although our work is always local, case-based reasoning will suggest analogies even for those whose work seems on the surface far-flung. Research is future-oriented and remains forever open. I offer this framework in that spirit.

Acknowledgments

My thanks go to my supervisor at Yale University, Edward Kairiss. He asked me to reflect on what a “pilot” was, and when I found that I needed to step back and get a wider view, not only did he not balk, he encouraged me. What is written here would not exist without his encouragement and support. Additional thanks go to David Hirsch, whose organizational work at Yale provided a model of what the reflective practitioner can accomplish.

Note

  1. Peter Chilson, "The Border," in The Best American Travel Writing 2008, ed. Anthony Bourdain (Boston: Houghton Mifflin Company, 2008), 44–51; Michael Hviid Jacobsen, "Reading Goffman 'Forward,'" in The Social Thought of Erving Goffman, eds. Michael Hviid Jacobsen and Søren Kristiansen (London: Sage, 2014), 147–159; and Emanuel A. Schegloff, "Goffman and the Analysis of Conversation," in Erving Goffman: Exploring the Interaction Order, eds. Paul Drew and Anthony Wootton (Cambridge, UK: Polity Press, 1988), 89–93.

© 2015 Edward R. O'Neill. The text of this EDUCAUSE Review article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 license.