I don't like using the term "consumer" because it implies an economic function of the searcher.  There is certainly an economy (exchange of value) in the Searchable Web Ecosystem but "consumers" are really "searchers".  I say "consumers" because people identify with that word more readily than they do with the word "searcher".  A "consumer" is someone like you and me: we consume things.  A "searcher" is someone who is statistically measured in a more clinical environment.
Well, there you have it: an arbitrary and artificial distinction between "consumer" and "searcher" in an informational context about Web search.  That kind of granular differentiation doesn't matter much today but in a few years it may be the determining factor that leads a search engine to include or exclude this article from the search results for a group of queries.  These queries will differ from most queries today because they are computational.

"Computational" is not the best word to use as a label, because we already have computational queries in the form of "10^2 =" and "21*19273.28383=".  You can type these into Bing and Google and they will compute the answers for you.  Maybe it's not too late to call them calculator queries, but how would you distinguish those from a query on Wolfram Alpha such as "number of US citizens over 21"?

That last example produces an unsatisfactory result on Wolfram Alpha.  Google doesn't even attempt to answer the question; instead, it points you to Web documents loaded with data (none of which seem to contain the answer).

In fact, the question as I typed it above is worded incorrectly.  What we really want to know is "number of US citizens age 21 and older", but more people will ask "over 21" (meaning "21 and older") than will type "21 and older".  Technically, if a search engine ever tries to tell us how many citizens are "over 21" it will exclude all the 21-year-olds due to naive literalness (until someone points out the discrepancy between the idiomatic expression and the literal meaning of the phrase).
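To make the discrepancy concrete, here is a minimal sketch (in Python, with a made-up list of ages) of how the literal and idiomatic readings of "over 21" diverge; the ages and the filters are purely illustrative:

```python
ages = [19, 20, 21, 22, 35, 67]  # hypothetical sample of citizen ages

# Naively literal reading of "over 21": strictly greater than 21.
literal_over_21 = [a for a in ages if a > 21]     # [22, 35, 67]; the 21-year-olds are dropped

# Idiomatic reading, i.e. "21 and older".
idiomatic_over_21 = [a for a in ages if a >= 21]  # [21, 22, 35, 67]

print(len(literal_over_21), len(idiomatic_over_21))  # 3 4
```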

On the other hand, there are legitimate queries that would need to exclude people aged 21, such as when you are computing percentiles.  The context is not obvious in the query itself, but it can be provided either by the other queries in the same search session or by the best-matching answers (as determined by the searcher, not the search engine).  Most of us run multiple queries about complicated subjects, yet we are generally satisfied by only a few results per query, even if those results are not completely relevant to our search.

In this sense I use "search" to refer to the "search session for a given topic".  A "search session" might include several dozen queries about four or five topics.  We become easily distracted in our search.  But a "search session for a given topic" consists of just those queries within a "search session" that are about a specific topic (if I may beat that dead horse a little more).

What I have done here is write a couple of paragraphs of text that provide specific context to two unrelated queries.  Both of these contexts are more substantive and yet more precise than any single query is likely to be.  Unless someone searches for the exact text of a paragraph, that paragraph is not likely to be the best response to a query.

Quotational queries receive little thought in the sphere of search query analysis, although search algorithms address them with some refinement.  But quotational queries are examples of keyword-less queries; they are not about "topics" or "concepts".  The majority of queries are considered to be "informational queries".  Someone wants to know something.  It may be as simple as "what time is it" or "what was the score of the Seahawks and Panthers game" (from January 17, 2016).  Or it may be as complex as "how do I drive from my house to my cousin's house 3,000 miles away".  All of these are examples of real queries people use every day.

But a more refined type of informational query is emerging, has emerged, I say, from all the searching activity.  I call these "computational queries" because they require the search engine to compute something, to identify a context for the answer, or to filter out false-positive information.  A computational query is highly likely to be one in a series of queries that are all driving toward a specific objective that is not apparent in any of the individual queries.

Some search engineers like to talk about "the Star Trek computer" and how it interacts with the people who use it.  J.A.R.V.I.S. in the "Iron Man" and "Avengers" movies is a similar ideal computer system.  It does a lot more than just answer questions and conduct searches in the background.  Both fictional computer systems take some initiative in responding to user queries.  Specifically, they interpret natural language and design their own filters.

In other words, these fictional computers intuitively know how to encapsulate context.  Apple, Bing, and Google have implemented some simple forms of context encapsulation in their interactive search algorithms, although the results are still crude.  Googlers especially like to show off how Google Now "remembers" what you just asked about and uses that prior query to refine your next one.  This is how a "search session for a given topic" should work, but it doesn't always work as we want it to.
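To illustrate what even a crude version of that "remembering" involves, here is a deliberately naive Python sketch of session context carryover; the pronoun substitution, the entity detection, and the example queries are my own simplifications, not how Google Now or any real engine works:

```python
import re

# Deliberately naive sketch of session context carryover: remember the last
# capitalized phrase seen and substitute it for "it" in the next query.
session = {"entity": None}

def refine(query: str) -> str:
    refined = query
    if session["entity"]:
        refined = re.sub(r"\bit\b", session["entity"], refined)
    # Crude "entity detection": the last run of capitalized words not at the start of the query.
    matches = [m for m in re.finditer(r"[A-Z]\w+(?:\s+[A-Z]\w+)*", query) if m.start() > 0]
    if matches:
        session["entity"] = matches[-1].group(0)
    return refined

print(refine("How tall is the Space Needle"))  # unchanged; remembers "Space Needle"
print(refine("Who designed it"))               # -> "Who designed Space Needle"
```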

There are several components missing from the idealized search system.  First, Web publishers are not including enough contextual information in their content.  There is no simple, singular way to explain this; it's one of those "you know it when you see it" situations.  If we try to solve the problem with fuzzy logic we'll end up with huge rule sets that break the responsiveness of the search system.  What we need are easily extractable (or inferable) rules that we embed in the content.

For example, suppose you embed a picture of the Twin Towers as they burned on September 11, 2001, in an article about New York City.  If you include a label or caption for the picture that says, "The Twin Towers as they were burning on September 11, 2001", you have provided a description of the picture which contains an implied context (the terrorist attacks on the United States on September 11).
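One way a publisher could make that implied context explicit and machine-extractable is with structured data.  A hedged sketch follows, using schema.org's ImageObject vocabulary expressed as a Python dictionary and serialized to JSON-LD; the specific values are illustrative, not drawn from any real page:

```python
import json

# A sketch of explicit, machine-readable context for the photo caption.
# Property names follow schema.org's ImageObject vocabulary; values are invented.
image_context = {
    "@context": "https://schema.org",
    "@type": "ImageObject",
    "caption": "The Twin Towers as they were burning on September 11, 2001",
    "contentLocation": {
        "@type": "Place",
        "name": "World Trade Center, New York City",
    },
    "about": {
        "@type": "Event",
        "name": "September 11 terrorist attacks on the United States",
    },
    "dateCreated": "2001-09-11",
}

print(json.dumps(image_context, indent=2))
```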

But what if someone is searching for a different "Twin Towers"?  If you include enough negating query terms on Google you will find other "Twin Towers" search results that would indeed be of interest to people, especially people in particular communities.
twin towers -"new york city" -"world trade center" -"osama bin laden" -"september 11"
How do you teach searchers to do that?  Most people don't even know they can use the negation operator in their queries.  Of those who do, many grow weary of adding negated terms as the search engines stubbornly keep throwing up the wrong content (this is called "search fatigue").  And there may be a more efficient way to search for a "Twin Towers" other than the two World Trade Center buildings that were destroyed, but you search the way you search and that is usually all that happens.  We don't like to change our idiomatic queries.
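Mechanically, what the negation operator does is easy to sketch; the tedium lies in the searcher having to supply every excluded phrase by hand.  A rough Python illustration, with invented result snippets standing in for indexed documents:

```python
query_terms = ["twin towers"]
negated_phrases = ["new york city", "world trade center", "osama bin laden", "september 11"]

# Hypothetical result snippets standing in for indexed documents.
results = [
    "Twin Towers Correctional Facility in Los Angeles",
    "The Twin Towers of the World Trade Center in New York City",
    "Wembley's old Twin Towers before the stadium was rebuilt",
]

def matches(snippet: str) -> bool:
    text = snippet.lower()
    return (all(term in text for term in query_terms)
            and not any(phrase in text for phrase in negated_phrases))

print([r for r in results if matches(r)])
# ['Twin Towers Correctional Facility in Los Angeles',
#  "Wembley's old Twin Towers before the stadium was rebuilt"]
```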

A "Star Trek computer" remembers the context of the conversation and uses that to refine its searching.  Modern search engines do a poor job of that but they are improving on it.  However, because most people don't think about explaining the context of their thoughts in blog posts, on business Websites, and in product descriptions the search engines frequently have to make do with just one side of the conversation.

The Searchable Web Ecosystem consists of three symbiotic members: Publishers, Indexers, and Searchers.  The Indexers facilitate the conversation between Publishers and Searchers.  But Publishers do a very poor job of speaking to the Searchers and Searchers are ignorant of what the Publishers think is important.  Have you ever asked a car mechanic or a doctor to explain the causes of a problem to you?  "The gelfik congobulous has been redistributed across the importunous channel with a radicalized spray of undercanned mapitulites".

Thank you, now can you explain that in English?  For the average Web surfer, looking for answers to relatively simple mathematical and physics questions on the Web is a long and tortuous process that involves yelling and screaming at inarticulate holders of Ph.D.s and public school teachers who post their class notes on the open Web.  There is seldom sufficient context in these random blog posts, PDF files, articles, tutorials, and searchable helpful-hint guides for anyone who knows nothing about the topic to understand what the writer is saying.

A scientist once told me no one is lazier or worse at communicating his thoughts to people than a scientist.  And of course the handful of scientists who have been successful at popularizing scientific concepts for the public (like Carl Sagan) inevitably draw critical rebukes from other scientists who are more attuned to the real work being done in science (or whatever the complaint of the day may be about the pop scientists).

After all, it takes a lot of time to get to the point where you understand and think in all that scientifical felgercarb.  You can't be expected to explain it all in a single blog post, right?  Providing that much context is just not practical.

And so context is what is usually missing from the information we publish on the Web.  Whether you are searching for information about welding, drilling for petroleum, laying asphalt highways, breeding large cats in zoos, finding a job in a high school cafeteria or thousands of other obscure topics in which you are not expert, you most often find incomplete results that make no sense.

This is why Wikipedia is so popular.  It is the layman's guide to expert gobbledy-gook.  And, of course, when experts try to fix the layman's gobbledy-gook that prevails on Wikipedia they are confounded and confusticated by the Wikipedia editors who obsess over "neutral point of view" and "self-promotion", as if requiring one and prohibiting the other somehow improves the quality of information.

We destroy context with our assumptions, even good-hearted assumptions intended to keep us from ripping each other apart verbally.  And so, with this near-universal lack of context to choose from, search engines rarely serve up the best, most complete answer.  To compensate, therefore, they try to remember the context of the previous queries and refine your results on the basis of what they just showed you (even though that was probably wrong).  In the end you either give up or cave in to whatever was less wrong than before.

But now we have information at our fingertips, and we can talk to the search engines, and so we want that Star Trek computer experience.  We have goals for our searches which can be gleaned if the search engines have just a little bit more context.  Local search results were the first and easiest queries to contextualize.  "Where is [NAME] restaurant?" is a popular query.  We'll often be shown a map to the nearest locations, and sometimes we'll be offered hours of operation and phone numbers.  Now search engines are even trying to make reservations for us.

It's not so easy with product search, but we're getting there.  You might be looking for a very specific type of light bulb to use in an expensive lighting fixture.  Searching for the light bulb brings up all sorts of useless links (that are useful in other contexts).  So you drive down to the local discount store and start wandering the aisles.  Then you remember they have a Website, so you try to search their app or Website.

But either you cannot connect (because you're inside the store with no repeater and too much metal around you) or their site search sucks.  A search on Bing or Google might bring up what you need but it's not properly formatted for the mobile phone.

These are problems of context.  A smart search engine should be able to infer at some point that we are looking for a specific kind of light bulb.  We want to know how to get to it, how much it costs, and whether it is in stock.  The retailer Website may have this information or not.  The search engine may have this data in its advertising database.  But what does it take for us to get to that information?

Someone has to compute the path and lay it out.  The normal solution is for the searcher to keep changing queries and search tools.  Website publishers are almost useless here, although they could provide more information.  They just don't have the tools they need (yet) to see what it is that consumers need to know.

The search engine's current ability to infer context and intention is limited.  When the consumer changes strategy, the search engine remains stuck in the past.  That is especially true if you are logged in to the search engine and have given it permission to analyze your search history.  You could tell a Star Trek computer to ignore your recent searches and try a different path.  Today's search engines stubbornly chain you to the data they have just analyzed.

This search friction creates a serious lag between how we actually search and the results we want.  We have to give the search engines time to catch up to us.  Sometimes you can just clear your browser history and cookies (and that is doable on an Android phone).  But who wants to give up all their login states just to get the search engine to stop fussing over the wrong search results for one set of queries?

By computing context on every query and giving the searcher more control over what factors are used the search engine becomes more like the Star Trek computer.  We should be able to say to our phones, "Ignore all results about New York City" when we search for "Twin Towers".  And the search engine should know if a page is talking about the Twin Towers in New York City even if the name of the city does not appear on the page or in links pointing to the page.
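Knowing that a page is about New York City when the city is never named is the harder half of that request.  Here is a crude Python sketch of how an engine might infer the topic and honor a searcher-supplied exclusion; the topic-to-term associations are invented for illustration:

```python
# Invented associations between a topic and terms that imply it.
topic_signals = {
    "new york city": ["manhattan", "world trade center", "lower manhattan", "wtc", "port authority"],
}

def implies_topic(text: str, topic: str) -> bool:
    """Rough topical inference: the topic is implied if its name or any associated signal term appears."""
    text = text.lower()
    return topic in text or any(term in text for term in topic_signals.get(topic, []))

excluded_topic = "new york city"   # "Ignore all results about New York City"

pages = [
    "The Twin Towers stood in Lower Manhattan until 2001.",        # NYC implied, city never named
    "Twin Towers Correctional Facility serves Los Angeles County.",
]

print([p for p in pages if not implies_topic(p, excluded_topic)])
# ['Twin Towers Correctional Facility serves Los Angeles County.']
```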

Computational queries require that complicated information sets be put together.  Who is responsible for creating these connections?  The user should be able to say, "Tag this page for future queries about [X]", but that becomes tedious.  The publisher should be able to say, "Tag this page for queries about [A,B,C]", but then that means we have to start worrying about synonyms and homonyms and nymophobic queries (queries that avoid using brand names and precise technical terms).
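Even the simple "tag this page" idea runs into the vocabulary problem immediately.  A small Python sketch of tag expansion shows where the maintenance burden comes from; the synonym table is invented, and every entry in it would have to be curated by someone:

```python
# Invented synonym table; homonyms pull the other way, one tag mapping to unrelated meanings.
synonyms = {
    "light bulb": ["lightbulb", "lamp", "incandescent bulb", "led bulb"],
    "twin towers": ["wtc towers", "world trade center towers"],
}

def expand_tags(tags):
    """Expand publisher- or searcher-supplied tags with known synonyms."""
    expanded = set()
    for tag in tags:
        tag = tag.lower().strip()
        expanded.add(tag)
        expanded.update(synonyms.get(tag, []))
    return expanded

print(sorted(expand_tags(["Light Bulb"])))
# ['incandescent bulb', 'lamp', 'led bulb', 'light bulb', 'lightbulb']
```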

We don't really want semantic search because that doesn't go far enough.  Nor do we want personalized search because that limits us to what we have already searched for and viewed.  We want computational search that figures out what we are looking for and how to show it to us.  And that may consist of more than one result.  Those multiple results may fall into disparate categories that each represent part of a greater whole, a "big picture" that doesn't appear in today's search results.

By analyzing searcher behavior search engines and publishers are slowly inching toward a better search system but they have yet to create that idealized Star Trek experience.  We are searching in fragments and writing in fragments, and we expect the search engines to figure it all out.  I am pretty sure they won't succeed without help from everyone.

Publishers are in a better position to improve the search situation than searchers because publishers can be more meticulous about providing contexts.  If you include multiple contexts in your content, you will be publishing Responsive Contexts.  A Responsive Context offers several meanings, several contexts, each easily inferable by algorithms, that help the document match multiple differentiated queries more easily.

When a search engineer can identify enough signals to extract a Responsive Context from random content on the Web (or highly organized content in a site search or app search tool) then he'll be able to write new algorithms that take advantage of this context.  It will work very much like Responsive Advertising does, in which the advertiser supplies multiple images of varying sizes and the advertising platform chooses which one to display on the basis of context (the size of the device screen where the ad will be shown).
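By analogy, a Responsive Context system would choose which of a document's declared contexts to match against a given query, the way a responsive ad platform chooses an image to fit the screen.  A speculative Python sketch follows; the structure of the declared contexts is my own invention, not an existing standard:

```python
# A document that declares several contexts for the same content.
document = {
    "url": "https://example.com/twin-towers-history",
    "contexts": [
        {"label": "new york city", "terms": ["world trade center", "manhattan", "september 11"]},
        {"label": "correctional facility", "terms": ["los angeles", "jail", "inmates"]},
        {"label": "wembley stadium", "terms": ["london", "football", "1923"]},
    ],
}

def best_context(query: str, doc: dict):
    """Pick the declared context whose label and terms overlap the query the most."""
    query_words = set(query.lower().split())
    def overlap(ctx):
        return sum(1 for term in ctx["terms"] + [ctx["label"]]
                   if set(term.split()) & query_words)
    return max(doc["contexts"], key=overlap)

print(best_context("twin towers jail los angeles", document)["label"])
# correctional facility
```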

The context will at first have to be explicitly embedded in the content, but eventually we should be able to connect information sources to each other through new types of meta information.  Relationships have to be organized and documented in some way.  We don't have a means for doing this yet, except in one singular context: the elimination of duplicate content.  We need to develop new ways to enhance content so that its greater and multiple contexts become more obvious to search engines.

Search fatigue and consumer search behavior are, for now, the mechanisms triggering this evolution in the Searchable Web Ecosystem.  But publishers of all experience levels need to surge ahead of the consumers and take charge.  As publishers come to understand the complexities of context better they will create new publishing tools to help them articulate those complex contexts better, and the search engines will be able to improve their algorithms even more.