Scalable database management systems, whether for update-intensive application workloads or for decision-support systems performing descriptive and deep analytics, are central to cloud infrastructure. The service-oriented computing
paradigm of the cloud is being extended to Database as a Service and Storage as a Service. Elasticity, pay-per-use pricing, low upfront investment, short time to market, and transfer of risks are the leading
attributes that make these services attractive for deployment in enterprise infrastructure.
Transitioning to cloud infrastructure environments poses several challenges: scalability, elasticity, fault tolerance, self-manageability, and the ability to run on commodity hardware. Most
relational database systems were designed for enterprise data centers and do not meet all of these goals. Cloud data management systems are being implemented to meet these operational requirements.
Cloud Computing Databases
The facilities and feature set of the cloud make it attractive for deploying new applications, developing scalable data management solutions, supporting update-heavy applications, and running ad hoc analytics and decision support. The central design objective is to build a single-tenant database management system with one large database per application. An application can start small, but
as its popularity increases, its data footprint will continue to grow and eventually exceed the limits of a traditional relational database.
With the Internet, a cloud distributed application can experience both explosive growth and wide fluctuations in use. Application servers can readily scale out; however, the data management infrastructure
can become a bottleneck.
Apache Hadoop Open Source Platform
Apache Hadoop is an open source platform for consolidating large-scale data from real-time and legacy sources. It complements existing data management systems with new analyses and processing tools, and was designed for efficient, reliable analysis of terabytes and petabytes of both structured and unstructured data. An enterprise can deploy Hadoop alongside its legacy
information technology systems, allowing old data and new datasets to be combined.
Hadoop divides large workloads into smaller data blocks which are distributed across a cluster of commodity hardware for faster, more efficient processing. It is part of a larger framework of related technologies:
HDFS: the Hadoop Distributed File System, which handles storage; a distributed file system providing high-throughput access to application data.
HBase: a column-oriented, non-relational, schema-less distributed database modeled after Google's BigTable; it is designed to provide random, real-time read/write access to big data.
Hive: a data warehouse system which provides a SQL interface and projects structure onto unstructured underlying data.
Pig: a platform and language for managing and analyzing large datasets.
ZooKeeper: a centralized service for maintaining configuration information, naming, distributed synchronization, and group services.
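The block-distribution idea underlying this stack can be sketched in a few lines of Python. This is a conceptual illustration only, not HDFS code; the block size, node names, and round-robin placement (real HDFS also replicates each block) are illustrative assumptions.

```python
# Conceptual sketch: divide a large input into fixed-size blocks and
# spread them across cluster nodes (illustrative, not actual HDFS logic).

def split_into_blocks(data: bytes, block_size: int) -> list:
    """Split a byte payload into fixed-size blocks (last block may be short)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def assign_blocks(blocks: list, nodes: list) -> dict:
    """Assign block indices to nodes round-robin (HDFS additionally replicates)."""
    placement = {node: [] for node in nodes}
    for i in range(len(blocks)):
        placement[nodes[i % len(nodes)]].append(i)
    return placement

payload = b"x" * 1000               # stand-in for a large file
blocks = split_into_blocks(payload, block_size=256)
placement = assign_blocks(blocks, ["node1", "node2", "node3"])
print(len(blocks))                  # 4 blocks: 256 + 256 + 256 + 232 bytes
print(placement["node1"])           # blocks 0 and 3 land on node1
```

Because each node holds only some blocks, processing can run on many machines at once, which is the property the framework components above exploit.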
Apache Hadoop Processing Framework
Hadoop is a generic processing framework designed to execute queries and other batch read operations against extremely large datasets, hundreds of terabytes to petabytes in size. Data is loaded into or appended to HDFS, the Hadoop Distributed File System. Hadoop then performs brute-force scans through the data and writes the results to output files.
Hadoop is not a relational database; it does not perform updates or transactional processing, and it does not natively support indexing or a SQL interface, although open source projects are adding these capabilities. Hadoop operates on massive datasets by scaling processing horizontally across large numbers of servers through MapReduce. Vertical scaling, executing everything on
a single powerful server, can be both limiting and expensive. MapReduce splits a problem into sub-problems, sends the sub-problems to different servers, and lets each server solve its sub-problem in parallel. It then merges all the sub-problem solutions
and writes the result into files, which can serve as inputs to additional MapReduce steps.
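The split/solve/merge flow can be illustrated with a minimal in-process MapReduce sketch, here a word count with explicit map, shuffle, and reduce phases. This simulates the programming model only; real Hadoop runs the map and reduce functions on different servers and moves intermediate data between them.

```python
from collections import defaultdict

# Minimal in-process MapReduce sketch (word count). Each input split would
# normally live on a different node; here everything runs in one process.

def map_phase(chunk: str) -> list:
    """Mapper: emit a (word, 1) pair for every word in an input split."""
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs: list) -> dict:
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups: dict) -> dict:
    """Reducer: collapse each key's list of values to a single result."""
    return {key: sum(values) for key, values in groups.items()}

splits = ["big data big cluster", "big data"]   # two input splits
intermediate = [pair for s in splits for pair in map_phase(s)]
counts = reduce_phase(shuffle(intermediate))
print(counts["big"])    # 3
```

The final `counts` dictionary plays the role of the output files that a real job writes to HDFS and that a follow-on MapReduce step could consume.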
Hadoop has been useful in environments where massive server farms collect the data. Hadoop processes parallel queries as large background batch jobs on the same server farm. This saves the organization from having to acquire additional hardware for processing the data. It also obviates the requirement for loading the data into another system.
Hadoop - Oracle and IBM Corporation Databases
The leading commercial enterprise database software offers capabilities that Hadoop does not provide: performance optimizations, analytic functions, declarative features for complex analysis, enterprise security, auditing, maximum availability, and disaster recovery. The Oracle Exadata Database is the technology leader;
it can coexist with and complement Hadoop. The inexpensive cycles of server farms running Hadoop can be used to transform masses of unstructured data with low information density into dense structured data, which is then loaded into Oracle Exadata.
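This transformation pattern, distilling low-density unstructured text into dense structured rows before loading them into a database, can be sketched as follows. The log format, field names, and regular expression here are hypothetical, purely to illustrate the idea.

```python
import re

# Hypothetical raw server-log lines: most of the text is noise; only the
# timestamp, user, and action carry information worth loading into a database.
raw_lines = [
    "2024-01-15T10:02:11 INFO user=alice action=login src=10.0.0.5 ...",
    "2024-01-15T10:02:13 DEBUG heartbeat ok",
    "2024-01-15T10:05:40 INFO user=bob action=purchase src=10.0.0.9 ...",
]

PATTERN = re.compile(r"(?P<ts>\S+) INFO user=(?P<user>\w+) action=(?P<action>\w+)")

def to_structured(lines):
    """Keep only lines matching the pattern; emit dense structured rows."""
    rows = []
    for line in lines:
        m = PATTERN.match(line)
        if m:
            rows.append({"ts": m["ts"], "user": m["user"], "action": m["action"]})
    return rows

rows = to_structured(raw_lines)
print(len(rows))           # 2 structured rows survive from 3 raw lines
```

In the architecture described above, a job like this would run across the Hadoop cluster, and only the small structured result would be loaded into the database.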
Oracle Data Integrator (ODI) is based on Hive and provides native Hadoop integration, with a user interface for creating programs that load data to and from files or relational datastores. Oracle Loader for Hadoop implementations would otherwise require Java MapReduce code to be written and executed on the Hadoop cluster; ODI and the ODI Application Adapter for Hadoop provide a graphical user
interface for creating these programs instead. The ODI Application Adapter for Hadoop generates optimized HiveQL, which in turn generates native MapReduce programs executed on the Hadoop cluster.
Oracle Loader for Hadoop is a MapReduce utility for optimized data loading from Hadoop into Oracle Database. It sorts, partitions, and converts data into Oracle Database formats within Hadoop, then loads the converted data into the database. Performing this preprocessing as a Hadoop job on the Hadoop cluster reduces CPU and I/O utilization on the database, and it results in faster index creation once the data is in the database. The ODI Application Adapter for Hadoop includes ODI Knowledge Modules optimized for Hive and for Oracle Loader for Hadoop.
These knowledge modules can be used to build Hadoop metadata within ODI, load data into Hadoop, transform data within Hadoop, and load data directly into an Oracle database using Oracle Loader for Hadoop.
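The sort-and-partition preprocessing that such a loader performs on the cluster side can be mimicked conceptually: records are hash-partitioned by a key and sorted within each partition, so the database-side load and index build see pre-ordered input. The record layout and partition function below are illustrative assumptions, not Oracle Loader for Hadoop internals.

```python
# Conceptual sketch of offloading sort/partition work from the database:
# records are hash-partitioned and sorted on the cluster, so the expensive
# database-side load (and index creation) receives pre-ordered input.

def preprocess(records, num_partitions):
    """Hash-partition records by id, then sort each partition by id."""
    partitions = [[] for _ in range(num_partitions)]
    for rec in records:
        partitions[rec["id"] % num_partitions].append(rec)
    for part in partitions:
        part.sort(key=lambda r: r["id"])
    return partitions

records = [{"id": i, "payload": f"row-{i}"} for i in (7, 2, 9, 4, 1)]
partitions = preprocess(records, num_partitions=2)
print([r["id"] for r in partitions[0]])   # even ids, sorted: [2, 4]
print([r["id"] for r in partitions[1]])   # odd ids, sorted: [1, 7, 9]
```

Each partition can then be streamed into the corresponding database partition without the database having to sort it again.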
IBM InfoSphere BigInsights Basic is the IBM distribution of Hadoop. IBM provides add-ons: a text analysis engine, development tools, data exploration, enterprise software integration, platform administration, and runtime performance improvements. The BigInsights Enterprise Edition additionally includes a text processing engine and a library of annotators for querying and identifying items of interest
in documents and messages.
The Enterprise Edition employs IBM-specific software to enhance administration and performance. Built-in support is available for several data formats: JSON, comma-separated values, tab-separated values, character-delimited data, and others. Plug-ins can be created for processing additional data formats and executing custom functions. An optional job scheduling mechanism tunes resource allocation between long- and short-running jobs. BigInsights
supports LDAP authentication to its web console; LDAP and reverse-proxy support restricts access to appropriately authorized users.
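The plug-in idea, built-in readers for common formats plus custom extensions, can be sketched with a small registry. The registry API and the `pipe` format below are hypothetical illustrations, not BigInsights code.

```python
import csv
import io
import json

# Hypothetical format-reader registry: built-in CSV/TSV/JSON readers plus
# a plug-in point for registering a reader for a new format.

READERS = {
    "csv": lambda text: list(csv.reader(io.StringIO(text))),
    "tsv": lambda text: list(csv.reader(io.StringIO(text), delimiter="\t")),
    "json": lambda text: [json.loads(line) for line in text.splitlines() if line],
}

def register_reader(fmt, fn):
    """Plug-in point: add a reader for a format the platform lacks."""
    READERS[fmt] = fn

def read(fmt, text):
    """Dispatch to the registered reader for the given format."""
    return READERS[fmt](text)

# A custom plug-in for pipe-delimited data, registered at runtime.
register_reader("pipe", lambda text: [line.split("|") for line in text.splitlines()])
print(read("csv", "a,b\nc,d"))        # [['a', 'b'], ['c', 'd']]
print(read("pipe", "a|b\nc|d"))       # [['a', 'b'], ['c', 'd']]
```

The built-in and custom readers share one interface, which is what lets downstream jobs stay format-agnostic.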
Apply the following techniques to improve Hadoop performance.
Combine the processing of multiple small input files into a smaller number of maps; this reduces per-task bookkeeping overhead. Enable task JVM reuse so that multiple map tasks run in one JVM, amortizing JVM startup overhead. Combiners help the shuffle phase, which sorts the output of the mappers and passes it to the Reducer, by reducing network traffic. The Reducer reduces a set of intermediate values which share a key to a smaller set
of values. Used appropriately, a Combiner will significantly reduce the data sent from the maps.
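The effect of a Combiner can be seen in a small sketch: pre-aggregating on the map side before the shuffle cuts the number of records that must cross the network. The word-count workload below is illustrative.

```python
from collections import Counter

# Illustrative word count showing how a mapper-side Combiner shrinks the
# intermediate data that the shuffle must move to the reducers.

def map_words(split_text):
    """Mapper output without a combiner: one (word, 1) pair per word."""
    return [(w, 1) for w in split_text.split()]

def combine(pairs):
    """Combiner: pre-aggregate counts locally on the map side."""
    return list(Counter(w for w, _ in pairs).items())

split_text = "to be or not to be " * 100       # 600 words in one map split
raw = map_words(split_text)
combined = combine(raw)
print(len(raw))        # 600 records shuffled without a combiner
print(len(combined))   # 4 records after combining ("to", "be", "or", "not")
```

Since word count's reduce function is associative and commutative, the combined output produces exactly the same final result with a fraction of the shuffle traffic.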
An excessive number of maps, or a large number of maps with extremely short run times, is counterproductive. Size maps so that their outputs can be sorted in a single pass within the sort buffer.
Design applications so that each reduce processes a minimum of 1-2 GB and a maximum of 5-10 GB of data. Use larger HDFS block sizes when processing very large datasets.
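The sizing guidance can be made concrete with a small calculation: the number of map tasks is roughly the dataset size divided by the HDFS block size, so a larger block size yields fewer, longer-running maps. The 10 TB dataset and block sizes below are illustrative.

```python
import math

# Rough map-task count: one map per HDFS block of input, so increasing
# the block size reduces the number of (short-lived) map tasks.

def num_maps(dataset_bytes: int, block_bytes: int) -> int:
    return math.ceil(dataset_bytes / block_bytes)

TB = 1 << 40
MB = 1 << 20
dataset = 10 * TB                    # illustrative 10 TB dataset
print(num_maps(dataset, 64 * MB))    # 163840 maps with 64 MB blocks
print(num_maps(dataset, 256 * MB))   # 40960 maps with 256 MB blocks
```

Quadrupling the block size cuts the map count by four, trading scheduling overhead for longer individual tasks.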