New Feature: Sell more to task-oriented shoppers with improved model search


This brand-new feature boosts conversion rates for ecommerce merchants by improving the search experience for shoppers who search for makes, models, and SKU numbers.

With its latest feature release, Findify, the leading Site Search and Personalization software company, improves its Personalized Search function even further, this time with better model search functionality to help shoppers quickly and easily find (and buy!) the products they are looking for.

What is model search, and who uses it? 

A model search, also referred to as a SKU search, takes place when a shopper types a product number/product code into the search bar of an ecommerce store. 

This search could be written in the form of ‘Product + Model’, along the lines of ‘iPhone 11’, ‘Sony A6000’, or ‘Fujifilm X-T3’, or it could take the more ambiguous form of something like ‘rt9027483’.

Merchants most likely to see customers searching with SKU-style queries are largely those selling cameras, laptops, spare parts, mobile phones, gaming equipment, automotive products, and home electronics.

An example of model search in action for Findify client Cyberphoto. The search term ‘nikon d5’ now includes Nikon D500 and D5000 – a full range not available through purely tokenized engines. 

How hard is it to build an effective search solution for SKUs?

While getting this type of search right is extremely difficult, it’s well worth the effort. Those searching for products in this way usually know exactly the make and model they want to buy, or at least the exact product range they want, meaning they are shoppers with a very high purchase intent and are, therefore, particularly valuable visitors.

The issue, however, is that a model search is notoriously hard to perfect.

“For normal, word-based searches, we’ve built a system based on tokenization, where shoppers don’t need to type out the entire word, or even type their query correctly, and they will still get relevant results,” explained Findify’s Seva Goloviznin.

“But model search/SKU searches need to be treated differently. While tokenization is great for word-based searches, there are limitations to this method when applied to model search and SKU search. Up until now, this has meant that shoppers who wanted to search using a SKU had to type their query perfectly to get the product they wanted to find.”

Of course, it isn’t always possible for shoppers to input the SKU exactly right, and merchants shouldn’t expect them to. Perhaps the shopper doesn’t know the full SKU, or they don’t realize there’s a hyphen or space in it… or they simply don’t want to spend time writing the full string. 

Let’s say, for example, there is a product called an ‘X-1/1’. Shoppers can, and will, search for this in the following ways: ‘X-1/1’, ‘X1/1’, ‘X11’, ‘X1’, ‘X-11’, ‘11’, ‘X’. If these searches don’t yield relevant results, the shopper is at high risk of churning and taking their business elsewhere, assuming the store doesn’t have what they’re looking for.
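To make the problem concrete, here is a minimal sketch of why a naive lookup falls short. The tiny catalog, the normalize() helper, and the query list are purely illustrative assumptions, not Findify’s implementation: exact matching only handles the first variant, and even stripping separators still misses the partial queries.

```python
# Hypothetical one-product catalog, keyed by its SKU.
catalog = {"x-1/1": "Product X-1/1"}

def normalize(query: str) -> str:
    """Drop separators and lowercase, so 'X-1/1', 'X1/1' and 'x11' compare equal."""
    return "".join(ch for ch in query.lower() if ch.isalnum())

normalized_catalog = {normalize(sku): name for sku, name in catalog.items()}

for query in ["X-1/1", "X1/1", "X11", "X1", "X-11", "11", "X"]:
    exact = catalog.get(query.lower())                  # naive exact match
    loose = normalized_catalog.get(normalize(query))    # separator-insensitive match
    print(f"{query!r:9} exact={exact!r:20} normalized={loose!r}")
```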

In other cases, more related to discovery, some shoppers might just want to see all the products in a certain range, so they want to type ‘T5’ and view not only the Lenovo ThinkPad T560 but also the T550 and the T540p. 

So what’s the solution?

Up until now, the best solution has been to tokenize the searchable product data. So, following our previous example, ‘T’ becomes one token and ‘560’ becomes another; longer SKUs would also get subtokens. When a user enters a query, the collection of documents containing these tokens is searched for the best matches.
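As a rough sketch of the idea (the letter/digit split, the tiny catalog, and the scoring below are illustrative assumptions, not Findify’s actual tokenizer), a token-based SKU index might look like this:

```python
import re
from collections import defaultdict

def tokenize_sku(text: str) -> set[str]:
    """Split a model code into coarse tokens: letter runs and digit runs."""
    parts = re.findall(r"[a-z]+|\d+", text.lower())   # 'T560' -> ['t', '560']
    return set(parts) | {"".join(parts)}               # keep the joined form too

# Build an inverted index: token -> products containing that token.
index = defaultdict(set)
catalog = ["Lenovo ThinkPad T560", "Lenovo ThinkPad T550", "Lenovo ThinkPad T540p"]
for product in catalog:
    for token in tokenize_sku(product):
        index[token].add(product)

# Score each candidate by how many query tokens it shares.
scores = defaultdict(int)
for token in tokenize_sku("T560"):
    for product in index.get(token, ()):
        scores[product] += 1

for product, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(score, product)   # the exact model ranks first, the rest of the range follows
```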

While this system can work well, it has limitations. Firstly, it is a huge undertaking to create so many tokens, and secondly, it creates a large amount of data for our system to process, meaning search results will likely take slightly longer to populate.

Other ecommerce site search providers have attempted to solve this problem with prefix search; however, Findify developers quickly realized this was just switching one set of problems for another, as this “fix” decreases both precision and the overall site experience.

This is why Findify developers have implemented a new system utilizing border-ngrams. This method means there is no need to create an endless number of tokens, and it also means searches take less time, as there is less data to screen. The icing on the cake is that Findify’s border-ngram method also makes model search results a lot more relevant.

With Findify’s model search, queries for ‘microsd’ yield a full range of results, including closely related products like microsdx, microsdxc, etc.

What is a border-ngram and how is it used?

Put simply, an ngram is a contiguous sequence of n characters taken from a piece of text; indexing these fragments lets partial or imperfect queries still match. However, generating the full set of ngrams for every product quickly explodes the index size, slowing the search down.

A border-ngram, therefore, is a compromise between sensitivity and index size, ensuring relevant results are returned without generating a huge amount of data and adversely impacting speed. 
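As an illustration of that trade-off, here is a small sketch, assuming ‘border-ngram’ refers to ngrams anchored at the start and end of a token (similar to what other engines call edge ngrams); the helper names and length limits are assumptions, not Findify’s internals.

```python
def all_ngrams(token: str, n_min: int = 2, n_max: int = 6) -> set[str]:
    """Every substring of length n_min..n_max -- the index-exploding variant."""
    return {token[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(token) - n + 1)}

def border_ngrams(token: str, n_min: int = 2, n_max: int = 6) -> set[str]:
    """Only ngrams anchored at the token's borders (its start and end)."""
    grams = set()
    for n in range(n_min, min(n_max, len(token)) + 1):
        grams.add(token[:n])     # prefix anchored at the left border
        grams.add(token[-n:])    # suffix anchored at the right border
    return grams

token = "microsdxc"
print(len(all_ngrams(token)), "full ngrams")        # many more entries...
print(len(border_ngrams(token)), "border ngrams")   # ...than border ngrams
print(sorted(border_ngrams(token)))
```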

Uniquely utilizing this method, Findify’s improved search solution recognizes any query that contains both letters and numbers as a potential model search and activates the border-ngram matching. This ensures that customers searching by SKU can find exactly what they’re looking for, even when the query contains misspellings, benefiting shopper and merchant alike.
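A minimal sketch of that detection rule, under the assumption that ‘contains both letters and numbers’ is exactly the trigger; the function name and example queries are illustrative only.

```python
import re

def looks_like_model_query(query: str) -> bool:
    """Heuristic: a query mixing letters and digits is probably a model/SKU search."""
    has_letter = bool(re.search(r"[A-Za-z]", query))
    has_digit = bool(re.search(r"\d", query))
    return has_letter and has_digit

for q in ["running shoes", "iphone 11", "rt9027483", "X-1/1"]:
    print(q, "->", "model search" if looks_like_model_query(q) else "regular search")
```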

For more information on Findify’s powerful ecommerce tools, including personalization software and solutions such as Personalized Search, Smart Collections, and Recommendations, book a demo here.

Ready to talk?

Book a non-binding demo session for a detailed discussion of how exactly Findify can help you improve customer experience and drive sales on your e-store.