
Identifying the Next Big Thing: Qual vs. Quant

Thomas A. Campbell, Ph.D., FutureGrasp, LLC & Matt Wolfe, Ph.D., Virginia Tech Applied Research Corporation (VT-ARC) • Jan 09, 2018
Like what you read? CONTACT FutureGrasp for more information, speaking engagements and/or collaboration

Throughout history, humans have remained obsessed with mechanisms for predicting the future. From oracles and auguries of the ancients, to modern day algorithms of Wall Street, we have trusted in diverse tools to foretell what might come next. Today, the union of technology and access to information may enable an unprecedented ability to quantitatively forecast the Next Big Thing in science and technology. However, qualitative tools—despite their many shortcomings—still dominate the domain.

In today’s fast-paced digital economy, a rapid and accurate prediction of science and technology (S&T) – “What will be the Next Big Thing?” – would be a powerful lever for investments, research and defense planning. One frequently sees lists of “Top 10 Technologies.” But despite their quantitative ring, such lists are generally produced through qualitative approaches. Experts are convened for a meeting (a.k.a. a BOGSAT – a “bunch of guys sitting around a table”), surveys are circulated on the internet to gather the crowd’s opinion, and the results are pooled into a list of technologies that will supposedly dominate the coming year. While certainly offering some insight, such qualitative approaches should be recognized for what they are: inherently limited.

CHALLENGES TO QUALITATIVE APPROACHES

Several challenges exist with a purely qualitative approach for S&T analysis.

Information Overload. Even if one gathers hundreds of experts in a scientific discipline and accumulates thousands of votes on the internet, it will still be impossible to truly absorb all the information in S&T, or even in a given field. The amount of information produced globally is now tremendous: every day we create 2.5 quintillion bytes of new data,[1] and every year roughly 2.5 million journal articles are published, as assessed by a 2014 study.[2] No one expert, or even group of experts, can absorb all this information, contextualize it, and make substantive predictions from it.
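The arithmetic alone makes the point. A back-of-envelope sketch follows; the 30-minutes-per-paper reading rate and the 2,000-hour working year are illustrative assumptions, not figures from the studies cited above:

```python
# Back-of-envelope: could any expert keep up with the literature?
# Assumptions (illustrative only): 30 minutes to read one paper,
# and a 2,000-hour working year devoted to nothing but reading.
ARTICLES_PER_YEAR = 2_500_000
HOURS_PER_ARTICLE = 0.5
WORK_HOURS_PER_YEAR = 2_000

total_hours = ARTICLES_PER_YEAR * HOURS_PER_ARTICLE    # 1,250,000 hours
experts_needed = total_hours / WORK_HOURS_PER_YEAR     # full-time readers

print(f"{total_hours:,.0f} reading hours per year")
print(f"{experts_needed:,.0f} full-time experts just to read everything once")
```

Under these assumptions it would take roughly 625 people reading full time merely to skim each year's output once, before any contextualization or prediction begins.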

Specialization. As a ‘new’ field becomes mainstream and grows, it tends to splinter into specialty fields, challenging anyone to remain a comprehensive expert. For example, 3D-printing used to be a little-known subset of mechanical engineering. Capabilities were limited primarily to cheap plastics and a handful of commercial printers. But as more researchers got involved and advances were made, the range of 3D-printable materials and systems exploded into the hundreds. Now almost any material, including limited types of food and organs, can be 3D-printed. If one was considered an expert in the early days of 3D-printing, able to answer any question on it, doing so became very difficult, if not impossible, as the field expanded.[3] Nowadays, experts are specialized in niche areas across most S&T disciplines – e.g., nanotechnology is now splintered into carbon nanotech, quantum dots, bio-nanotech, etc. In short, in the nascent days of an S&T space one might realistically claim to be an omniscient expert; as the field matures, one cannot. As a field expands, the BOGSAT needs many more people – and a much bigger table – to possess all-encompassing expertise.

Convergences. S&T fields tend to influence each other in unexpected ways. One could posit that the low-hanging fruit in siloed S&T specialties has already been plucked, and that to reach the high-hanging fruit one must build a ladder that joins disparate S&T branches together. Take, for example, artificial intelligence (AI), specifically deep learning. Through repeated cycles of hype and disappointment, AI went through two or more ‘winters’ in which funding dried up. In the early days of AI, a core limiting factor was computational power. Most recently, researchers have leveraged graphics processing units (GPUs), originally designed for gaming, to execute the massive computations required for deep learning. By doing so, AI has accelerated well beyond its origins with Alan Turing in the 1950s. Thus gaming chips unexpectedly enabled modern AI, which in turn is enabling new approaches to computation.

Emergences. New discoveries are often the result of serendipity, where chance favors the prepared mind. However, the germination of a new discovery or invention may be completely missed, no matter how much of an expert one is. Much of research is unpublished. A lab book note, a university seminar, a small workshop – these may be the only inklings one encounters that something new is on the cusp of debuting. In their excellent book, A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution, authors Doudna and Sternberg[4] detail the earliest days of CRISPR[5] research that presaged the incredible levels of investment and excitement now prominent in the biotechnology community. Only a handful of researchers were involved in the early stages of CRISPR. Unless one is truly in the know, such knowledge is missed until the work goes mainstream.

Non-Technical Influencers. Context is important, and never more so than today, when data, information (and disinformation), and judgments are readily shared and propagated globally. Consequently, the perception of S&T held by its developers, financiers, customers, and even the general public can greatly influence the evolution and acceptance of an innovation. For example, views of a technology such as genetically modified organisms (GMOs) vary with the culture and economy of different regions of the world, and change over time as the technology matures, is marketed, and its perceived successes and failures are shared. Thus, it has become critical to consider non-technical influences, such as market dynamics, when assessing the S&T outlook. Such factors and relationships are far too complex for any individual or team to interpret alone.

FROM QUAL TO QUANT

So what can we do to better grapple with the exponentially growing S&T space? How can we identify the subtle ripples in the S&T community that might signal a profound discovery? Can we begin to understand the impact of the non-science community on the financing, development and acceptance of S&T and of the products and services derived from it?

To contend with these issues, we should leverage a more quantitative approach to identifying, tracking, forecasting and assessing S&T. Humans should still remain in the loop, but our abilities can be augmented beyond our limited organic framework. We often have a hard time exchanging information among ourselves (we can all recall death-by-PowerPoint meetings); we must sleep and eat; and we enjoy time for entertainment, family and friends. Computers are excellent networkers and need no such organic supports.

There is a wealth of open-source information available to a properly coded algorithm. The figure above shows several examples of technical influencers upon S&T knowledge. There are three main technical sources of information on S&T: the internet, publications and meetings. Though not comprehensive, the sidebar tables list multiple examples of each. Within each row, one might also consider additional information such as authors, citations, and the number and origins of people attending meetings. Constructing a comprehensive dataset on even a sub-field of S&T would require substantial compute and storage.
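As an illustration of the data-fusion task this implies, the sketch below normalizes records from the three source types named above (internet, publications, meetings) into one common schema. The field names and sample records are invented for illustration; a real system would ingest far richer metadata.

```python
# Minimal sketch: fuse heterogeneous S&T feeds into one common schema.
# Field names and sample records are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class STRecord:
    source_type: str                      # "internet" | "publication" | "meeting"
    title: str
    year: int
    authors: list = field(default_factory=list)
    citations: int = 0                    # relevant for publications
    attendees: int = 0                    # relevant for meetings

def fuse(*feeds):
    """Merge heterogeneous feeds into one chronologically sorted corpus."""
    corpus = [rec for feed in feeds for rec in feed]
    return sorted(corpus, key=lambda r: r.year)

pubs = [STRecord("publication", "Deep learning survey", 2015,
                 authors=["A. Author"], citations=1200)]
meets = [STRecord("meeting", "Nanotech workshop", 2013, attendees=80)]
web = [STRecord("internet", "3D-printing blog post", 2016)]

corpus = fuse(pubs, meets, web)
print([r.year for r in corpus])  # chronological order across source types
```

Even this toy schema hints at the scale problem: multiply a few fields per record by millions of records per year, and the storage and compute demands noted above follow quickly.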

Ultimately, the ability to make meaning of such data is hampered by many factors. An unproven algorithm, disparate data, lack of data, misinformation, fake news: all can obfuscate and compromise a prediction’s accuracy, or even its proximity to any reality at all. Such issues must be addressed to produce a rigorous quantitative tool. To the authors’ knowledge, no organization has been able to fully leverage all the information we now possess on S&T with a quantitative approach. It is certainly worthwhile to develop such a program.

A PRESAGE APPROACH

In 2011, the Intelligence Advanced Research Projects Activity (IARPA) created the Open Source Indicators (OSI) program that challenged technology developers to create a capability that harvests open source datasets to make predictions, or otherwise forecast significant societal events before they occur. [6] The leading technology and winner of the OSI challenge, now offered by VT-ARC commercially as PreSage, was developed by Virginia Tech and serves as a base platform for several other predictive analytics challenges. Could such a tool provide the quantitative means to overcome the challenges faced by qualitative approaches to S&T analytics?

PreSage—or other automated systems of big data collection, fusion and analysis employing natural language processing (NLP), topic analysis, clustering, and other machine learning techniques—has the potential to significantly improve S&T analytics by augmenting the human analyst’s work and providing quantitative insights where previously qualitative approaches dominated. For example, an automated, AI-driven tool could:

· Rapidly ingest and process very large volumes of data in near real time, thereby easing the information overload encountered by analysts tracking S&T;

· Track and maintain a deep understanding of technical fields even as they become more specialized;

· Identify capabilities and shortfalls across seemingly unlinked S&T domains where convergence can create opportunity;

· Detect early indicators of S&T emergence so that decisions can be made proactively and with confidence; and

· Construct a holistic view of S&T inclusive of the non-technical information that nevertheless shapes the evolution and progression of technology.
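To make the clustering and topic-analysis idea concrete, here is a deliberately tiny sketch – not the PreSage system, whose internals are not described here – that assigns documents to topics by cosine similarity over plain term-frequency vectors. A production pipeline would use proper NLP tooling; every seed name and sample string below is illustrative.

```python
# Toy topic assignment: bag-of-words vectors + cosine similarity.
# Stands in for the NLP/clustering techniques described above.
import math
from collections import Counter

def vectorize(text):
    """Lowercase bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_topics(docs, seeds):
    """Assign each document to the most similar seed topic."""
    seed_vecs = {name: vectorize(text) for name, text in seeds.items()}
    return [max(seed_vecs, key=lambda n: cosine(vectorize(d), seed_vecs[n]))
            for d in docs]

seeds = {
    "ai": "deep learning neural network gpu training",
    "biotech": "crispr gene editing dna enzyme",
}
docs = [
    "gpu accelerated neural network training",
    "crispr enzyme modifies dna sequences",
]
print(assign_topics(docs, seeds))  # ['ai', 'biotech']
```

Real systems replace raw term counts with TF-IDF or learned embeddings and cluster without hand-picked seeds, but the core operation – mapping unstructured text into a vector space where similarity can be measured – is the same.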

PreSage has already demonstrated the ability to ingest and process a broad range of publicly available data, such as that shown in the figure; track and maintain large volumes of data, including streaming data; and fuse disparate datasets into a cohesive prediction. What is needed next is to develop appropriate models describing the variables and relationships specific to S&T, integrate them with non-technical indicators, and identify training and test datasets – a non-trivial yet achievable challenge. VT-ARC and FutureGrasp seek partners in this exciting journey to construct quantitative models for better forecasting and assessment of S&T.

CONCLUSIONS

What, then, is to come? If a successful S&T prediction program is realized, how might it be used to assist the scientist? Influence the investor? Inform the marketer? Defeat the adversary?

As the power and utility of AI-driven products and services continue to grow rapidly, AI itself should be leveraged to quantitatively forecast the progression of S&T more broadly. The predictive nature of tools such as PreSage promises not only greater insight into which technologies will deliver new capabilities in the years to come, but also into which will attract investment and consumer interest and lead to commercial success.

Ultimately, the production of information will never cease growing. Decisions need to be made more expediently and with greater rigor. A more quantitative approach toward S&T analysis would benefit anyone interested in trying to identify the Next Big Thing.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge Luke Sebby of VT-ARC for his review of this note.


NOTES


[3] One author [TAC] can attest to this first-hand, as he researched 3D-printing at Virginia Tech from 2007 to 2013.

[4] A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution, Jennifer A. Doudna,‎ Samuel H. Sternberg, 2017, Houghton Mifflin Harcourt Publishing Company.

[5] CRISPR is the acronym for “Clustered Regularly Interspaced Short Palindromic Repeats,” which refers to short segments of DNA, the molecule that carries genetic instructions for all living organisms. A few years ago, the discovery was made that one can apply CRISPR with a set of enzymes that accelerate or catalyze chemical reactions in order to modify specific DNA sequences. This capability is revolutionizing biological research, accelerating the rate at which biotech applications are developed to address medical, health, industrial, environmental, and agricultural challenges, while also posing significant ethical and security questions.

[6] The technology described, developed by Prof. Naren Ramakrishnan of Virginia Tech, utilizes open source data including tweets, Facebook pages, news articles, blog posts, Google search volume, Wikipedia, meteorological data, economic and financial indicators, coded event data, online restaurant reservations (OpenTable), and satellite imagery. Called EMBERS, the technology successfully forecasted events such as the “Brazilian Spring” (June 2013), Hantavirus outbreaks in Argentina and Chile (2013), and violent protests led by Venezuelan students (Feb 2014). The EMBERS technology has since successfully transitioned to the US Government.
