BigTable
BigTable is a compressed, high-performance, proprietary database system built on Google File System (GFS), Chubby Lock Service, and a few other Google technologies; it is not distributed or used outside of Google, although Google offers access to it as part of its Google App Engine.
History and Design
BigTable development began in 2004[1] and it is now used by a number of Google applications, such as MapReduce, which is often used for generating and modifying data stored in BigTable,[2] Google Reader,[3] Google Maps,[4] Google Book Search, "My Search History", Google Earth, Blogger.com, Google Code hosting, Orkut,[4] and YouTube.[5] Google's reasons for developing its own database include scalability and better control of performance characteristics.[6]
BigTable is a fast and extremely large-scale DBMS. However, it departs from the typical convention of a fixed number of columns; the authors instead describe it as "a sparse, distributed multi-dimensional sorted map", sharing characteristics of both row-oriented and column-oriented databases. BigTable is designed to scale into the petabyte range across "hundreds or thousands of machines, and to make it easy to add more machines [to] the system and automatically start taking advantage of those resources without any reconfiguration".[7]
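The "sparse, multi-dimensional sorted map" description above can be made concrete with a toy sketch. The following Python class is purely illustrative (the names and in-memory structures are assumptions, not Google's implementation): cells are keyed by (row, column, timestamp), only cells that exist are stored, and row keys are kept sorted to support range scans.

```python
import bisect
from collections import defaultdict

class SparseSortedMap:
    """Toy sketch of the Bigtable data model: a sparse map keyed by
    (row, column, timestamp) -> value, with row keys kept sorted.
    Illustrative only; not Google's implementation."""

    def __init__(self):
        self._rows = []                  # sorted list of row keys
        self._cells = defaultdict(dict)  # (row, column) -> {timestamp: value}

    def put(self, row, column, timestamp, value):
        i = bisect.bisect_left(self._rows, row)
        if i == len(self._rows) or self._rows[i] != row:
            self._rows.insert(i, row)    # keep rows sorted for range scans
        self._cells[(row, column)][timestamp] = value

    def get(self, row, column, timestamp=None):
        """Return the cell at a given version, or the latest version."""
        versions = self._cells.get((row, column), {})
        if not versions:
            return None
        if timestamp is None:
            timestamp = max(versions)    # newest timestamp wins by default
        return versions.get(timestamp)

    def scan(self, start_row, end_row):
        """Yield row keys in [start_row, end_row) in sorted order."""
        lo = bisect.bisect_left(self._rows, start_row)
        hi = bisect.bisect_left(self._rows, end_row)
        return self._rows[lo:hi]
```

The timestamp dimension is what gives each cell the versioning mentioned in the next paragraph: writing the same (row, column) twice keeps both values, distinguished by timestamp.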
Each table has multiple dimensions (one of which is a field for time, allowing versioning). Tables are optimized for GFS by being split into multiple tablets - segments of the table split along a row chosen so that each tablet will be about 200 megabytes in size. When a tablet threatens to grow beyond that limit, it is compressed using the BMDiff algorithm[8] (referenced in [9]) and the secret algorithm Zippy[10], which is described as a less space-optimal variation of LZO but more efficient in terms of computing time. The GFS locations of tablets are recorded as database entries in multiple special tablets, called "META1" tablets. META1 tablets are found by querying the single "META0" tablet, which typically has a machine to itself, since clients frequently query it for the location of the META1 tablet that in turn records where the actual data is located. Like GFS's master server, the META0 server is not generally a bottleneck, since the processor time and bandwidth needed to discover and transmit META1 locations are minimal, and clients aggressively cache locations to minimize queries.
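The META0 → META1 → data-tablet lookup chain described above can be sketched as follows. This is a minimal illustration under stated assumptions: the dictionaries stand in for real GFS-backed tablets, each keyed by the largest row key an entry covers, and the class and method names are hypothetical. The client-side cache is what keeps the single META0 tablet from becoming a bottleneck.

```python
class TabletLocator:
    """Illustrative sketch of the three-level tablet location lookup:
    META0 -> META1 -> data tablet. Dicts stand in for real tablets;
    names are hypothetical, not Google's API."""

    def __init__(self, meta0, meta1_tablets):
        self.meta0 = meta0          # end-row -> META1 tablet name
        self.meta1 = meta1_tablets  # META1 tablet name -> {end-row: tablet location}
        self.cache = {}             # client-side cache: row key -> tablet location

    def locate(self, row_key):
        # Aggressive caching keeps META0/META1 off the hot path.
        if row_key in self.cache:
            return self.cache[row_key]
        meta1_name = self._lookup(self.meta0, row_key)            # query META0 once
        location = self._lookup(self.meta1[meta1_name], row_key)  # then META1
        self.cache[row_key] = location
        return location

    @staticmethod
    def _lookup(tablet, row_key):
        # Each key is the largest row its entry covers, so pick the
        # first entry whose end row is >= the requested row key.
        for end_row in sorted(tablet):
            if row_key <= end_row:
                return tablet[end_row]
        raise KeyError(row_key)
```

After the first lookup for a row, subsequent reads of nearby rows hit the cache, so only a cold client ever touches META0, which matches the article's point that META0 traffic stays minimal.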
Other Implementations
Open source
- HBase - Written in Java; provides Bigtable-like support on the Hadoop Core.[11]
- Hypertable - Designed to manage the storage and processing of information on a large cluster of commodity servers.[12]
- Cassandra - Facebook's distributed storage system, combining a Bigtable-style data model with Amazon's Dynamo design.
- Project Voldemort - LinkedIn's low-latency distributed key-value storage system.
Google Fusion Tables
- Google Fusion Tables was released on June 9, 2009, as an experimental system for data management in the cloud.[13][14]
References
- ↑ "First an overview. BigTable has been in development since early 2004 and has been in active use for about eight months (about February 2005)." Google's BigTable
- ↑ "Bigtable can be used with MapReduce, a framework for running large-scale parallel computations developed at Google. We have written a set of wrappers that allow a Bigtable to be used both as an input source and as an output target for MapReduce job". pg 3 of "Bigtable: A Distributed Storage System for Structured Data", 2006
- ↑ "Reader is using Google's BigTable in order to create a haven for what is likely to be a massive trove of items." Official Google Reader blog.
- ↑ 4.0 4.1 "There are currently around 100 cells for services such as Print, Search History, Maps, and Orkut." Google's BigTable
- ↑ "Their new solution for thumbnails is to use Google’s BigTable, which provides high performance for a large number of rows, fault tolerance, caching, etc. This is a nice (and rare?) example of actual synergy in an acquisition." YouTube Scalability Talk
- ↑ "We have described Bigtable, a distributed system for storing structured data at Google....Our users like the performance and high availability provided by the Bigtable implementation, and that they can scale the capacity of their clusters by simply adding more machines to the system as their resource demands change over time...Finally, we have found that there are significant advantages to building our own storage solution at Google. We have gotten a substantial amount of flexibility from designing our own data model for Bigtable." from the Conclusion of "Bigtable: A Distributed Storage System for Structured Data", 2006
- ↑ "Database War Stories #7: Google File System and BigTable"
- ↑ Google, Bigtable, Compression, Zippy and BMDiff
- ↑ Bentley, McIlroy: "Data Compression Using Long Common Strings", DCC '99
- ↑ Google's Bigtable
- ↑ HBase - Hadoop Wiki, Background section
- ↑ [1]
- ↑ Google Fusion Tables
- ↑ Google Fusion Tables - Research Blog
External links
- Bigtable: A Distributed Storage System for Structured Data (official paper; PDF)
- BigTable: A Distributed Structured Storage System (video)
- more video
- Google's BigTable (notes on the official presentation)
- "How Google Works"
- Is the Relational Database Doomed?