January 12, 2011

Capacity planning for FAST Search Server 2010 for SharePoint

Information on FAST Search Server 2010, though better than it once was, is still scarce. In particular, its application in large environments is largely undocumented, though there is evidence that, at least internally, it has been proven. FAST ESP, of course, is well proven on large corpora and has been used in very large enterprises.

On my latest project, architecting the FAST search component of a system with a highly active userbase of 140,000 and a reasonably sized corpus of data (240 terabytes), with FAST Search Server 2010 for SharePoint chosen as the search server, I started to discover that there were holes, or at least some disjointedness, in the information presented by Microsoft.

The purpose of this article is to draw together some of this knowledge into a more coherent form, to help future architects design the topology and, in particular, to ensure that their designs are performant and scalable.

This article assumes a reasonable knowledge of SharePoint and FAST search architecture.

FAST farms consist of two main components:

  • Service servers: one or more servers, each holding one or more of the core roles (see this TechNet article).
  • Search cluster matrix: one or more servers arranged in a row/column structure that handle the indexing and query matching components. See the above article for detailed information on the performance of these services.

There are two primary drivers when considering the topology of a FAST farm: the size of the corpus, and the query performance needed. In my case the performance target was 350 queries per second (qps). The general rule of thumb with FAST is that you can reasonably expect 1 qps for each 1 GHz of core capacity available to the query matching component. This assumes you are using the search cluster matrix topology, not a single-server deployment.

Since we are looking for 350 qps, we need the equivalent of 350 cores at 1 GHz, or 175 cores at 2 GHz. Our target servers run dual quad-core CPUs at 2.93 GHz, so each query matching server can handle roughly 23.44 qps (8 cores × 2.93 GHz). Dividing 350 by 23.44 and rounding up gives 15, so we need 15 servers running the query matching service to achieve 350 qps of throughput.
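To make the arithmetic explicit, here is a minimal sketch of that sizing rule in Python. The constants are simply the figures from my scenario (the 1 qps per GHz rule of thumb, and dual quad-core 2.93 GHz servers); substitute your own hardware numbers.

```python
import math

# Rule of thumb: ~1 qps per 1 GHz of core capacity available to query matching.
TARGET_QPS = 350
CORES_PER_SERVER = 8       # dual quad-core
CLOCK_GHZ = 2.93

# qps each query matching server can sustain under the rule of thumb.
qps_per_server = CORES_PER_SERVER * CLOCK_GHZ            # 8 * 2.93 = 23.44

# Round up: a fractional server still means one more physical box.
servers_needed = math.ceil(TARGET_QPS / qps_per_server)  # ceil(14.93) = 15

print(f"{qps_per_server:.2f} qps per server -> {servers_needed} query matching servers")
```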

Once we know we need 15 query matching servers (i.e. servers in the search row component of the cluster), we need to arrange them correctly in the server matrix so that it can handle the corpus of data. In my case we have a 30 million item corpus; however, allowing for reasonable growth, I am planning for 60 million items. Since the performant maximum is 15 million items per index column, we are looking at 4 index columns.

Query matching servers MUST be distributed evenly between the index columns, so the closest we can get to 15 query matching servers is 4 index columns with 4 query matching servers in each, for a total of 16 query matching servers.

Each index column should also have at least one server running the indexer service. An additional indexer server can be added per column, but it is present purely for failover.
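Putting the column and row rules together, here is a rough sketch of the matrix layout calculation, assuming the 15-million-item ceiling per index column and the 15 query matching servers derived above:

```python
import math

ITEMS_PLANNED = 60_000_000          # 30M corpus, doubled for growth
MAX_ITEMS_PER_COLUMN = 15_000_000   # performant maximum per index column
QUERY_SERVERS_NEEDED = 15           # from the qps calculation above

# Columns are driven purely by corpus size.
index_columns = math.ceil(ITEMS_PLANNED / MAX_ITEMS_PER_COLUMN)   # 4

# Query matching servers must be spread evenly across all columns,
# so round the row count up to cover the required server total.
query_rows = math.ceil(QUERY_SERVERS_NEEDED / index_columns)      # ceil(3.75) = 4
query_servers = query_rows * index_columns                        # 16

# One indexer per column is required; a second per column is optional failover.
indexer_servers = index_columns * 1

print(f"{index_columns} columns x {query_rows} query rows = {query_servers} query servers")
print(f"plus {indexer_servers} indexer servers (double for failover)")
```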

On the storage sizing side of things, we need two types of storage:

  • Local direct-attached storage on the indexer boxes. This is used to store the indices, which are kept in a flat-file format. A good rule of thumb is to allow approximately 20% of the total corpus size for the indices.
  • SQL database storage for the crawl databases. These are used by the Content SSA in SharePoint to store metadata on the crawled corpus. A general rule of thumb is to allow 5% of the total corpus size (in my case 240 terabytes, so a crawl database allowance of 12 terabytes). This can be scaled out across multiple crawl databases, and the best advice is to host them on a separate box from the SharePoint databases, as they are extremely intensive on disk I/O, especially with a large, fast-changing corpus (see the sketch below).
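The same back-of-the-envelope approach works for storage. The ratios below are just the rules of thumb above (20% of the corpus for indices, 5% for crawl databases), applied to my 240 terabyte corpus:

```python
CORPUS_TB = 240

INDEX_RATIO = 0.20      # flat-file indices on the indexer boxes
CRAWL_DB_RATIO = 0.05   # crawl databases used by the Content SSA

index_storage_tb = CORPUS_TB * INDEX_RATIO    # 48 TB, spread across the indexers
crawl_db_tb = CORPUS_TB * CRAWL_DB_RATIO      # 12 TB, can be split across crawl DBs

print(f"Local index storage: ~{index_storage_tb:.0f} TB")
print(f"Crawl database storage: ~{crawl_db_tb:.0f} TB")
```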