Oracle Database 12c: In-Memory
In developing In-Memory, Oracle sought to create a technology that enables real-time analytics for rapid business decision-making. Notably, while Oracle's competitors require a separate database and separate technologies for their in-memory options, the Oracle Database In-Memory option is built into the database, is enabled by literally one parameter, is completely transparent to applications, and is compatible with all database capabilities. Customer experience with this option shows that transaction processing runs twice as fast, row inserts are three to four times faster than usual, and analytic queries execute in near real time, almost instantly.
The essence of the technology is that alongside the usual buffer cache, which stores table rows and index blocks, a new shared area is created in RAM where data is stored in columnar format. Thus, the technology keeps the same table data in memory in both row and columnar formats, with the data both active and transactionally consistent. All changes are first made in the traditional buffer cache and then reflected in the column cache.
Only tables are reflected in the column cache; indexes are not cached. In addition, data that is read but not changed need not be stored in the buffer cache at all, while data that changes is kept in both caches, buffer and column. In-Memory therefore speeds up analytics, because columnar storage is more efficient for analytic access.
In addition, the In-Memory option lets you get rid of analytical indexes without sacrificing performance, and gain flexibility in return: disk space is saved, a query can be built on any column held in In-Memory, and no additional indexes are needed for fast query execution.
Hardware support is an essential element of Oracle Database In-Memory. In particular, the technology uses SIMD (Single Instruction, Multiple Data) vector instructions, originally designed for graphics processing: when these instructions are present in the processor, In-Memory uses them to compare multiple column values with a predicate at once, raising column scan speed to as much as one billion rows per second.
But that’s not all. Oracle SPARC M7 and T7 servers, released in late 2015, include hardware support for In-Memory. For this, the M7 and T7 processors added a vector database scanning module, an In-Memory data decompression module, and a hardware memory protection module that verifies access to data in RAM in real time, protecting data from malicious intrusions and program code errors.
In order to use Oracle In-Memory, it is enough to set the size of the In-Memory Column Store memory buffer, specify which tables, partitions, and columns will reside in this memory, restart the database, and drop analytical indexes if they are no longer required for application performance. In-Memory is easily managed from Oracle Enterprise Manager, which has a separate In-Memory Central page that displays memory allocation between objects and allows you to configure the In-Memory Column Store. The latest version, Enterprise Manager 13c, includes the In-Memory Advisor tool, supported for database versions 11.2.0.3 and higher, which analyzes the existing database workload and produces a list of the objects that will give the maximum benefit when loaded into the In-Memory Column Store.
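The steps above can be sketched in SQL. This is a hedged example: the table name `sales`, the column names, and the 16G size are illustrative, not from the original text.

```sql
-- Size the In-Memory Column Store (takes effect after restart):
ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE=SPFILE;

-- Populate a whole table into the column store with high priority:
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Or populate only selected columns, choosing a compression level:
ALTER TABLE sales
  INMEMORY MEMCOMPRESS FOR QUERY HIGH (amount, region)
  NO INMEMORY (notes);
```

After the instance restarts, population happens in the background; no application change is required, which is the transparency the text describes.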
New opportunities for developers:
We’ve already talked about the IT industry’s shift from heavy monolithic applications towards web services. As web services increasingly access each other through REST interfaces, Oracle provides Oracle REST Data Services (ORDS), a Java application that offers a single REST interface for working with Oracle RDBMS (relational data and the JSON Document Store) and Oracle NoSQL Database. ORDS can run standalone or be deployed on the WebLogic Server, Oracle GlassFish Server, or Apache Tomcat application servers. SQL Developer provides a convenient platform for installing and configuring ORDS; in particular, it contains a configuration wizard that automatically creates REST services for access to database tables.
A VirtualBox virtual machine with the Big Data Lite Virtual Machine and preconfigured REST services is available for free on Oracle Technology Network. Since the same REST call can be applied to different databases, this increases programming flexibility and speed: the developer is not required to know SQL or database specifics. Oracle Database 12.1.0.2 has built-in support for JSON. REST services can work with the JSON Document Store in a version 12c database, with relational tables exposed through REST Data Services, or with NoSQL databases.
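The auto-enablement that the SQL Developer wizard performs can also be done directly with the ORDS PL/SQL API. A minimal sketch, assuming a schema `HR` and table `EMPLOYEES` (both illustrative names):

```sql
BEGIN
  -- Expose the schema under the URL path /ords/hr/:
  ORDS.ENABLE_SCHEMA(p_enabled             => TRUE,
                     p_schema              => 'HR',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'hr');
  -- Auto-enable REST access to one table:
  ORDS.ENABLE_OBJECT(p_enabled => TRUE,
                     p_schema  => 'HR',
                     p_object  => 'EMPLOYEES');
  COMMIT;
END;
/
-- Rows are then reachable over HTTP, e.g.:
-- GET http://host:port/ords/hr/employees/
```

The same GET call works whether the backing store is a relational table or a JSON collection, which is the flexibility the paragraph describes.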
Oracle Database 12c and Big Data:
Oracle Big Data Appliances are clusters designed to run Hadoop and NoSQL databases. Unlike other Oracle engineered systems, they are developed in collaboration with Cloudera, one of the leading suppliers of Hadoop distributions. Contrary to popular misconception, such systems are needed not only by Internet companies: today they serve any company that must analyze customer behaviour in depth, plan precisely targeted advertising, combine and analyze data from many sources (including unstructured ones), fight fraud, and so on.
Oracle Big Data SQL, as part of the Oracle Big Data Appliance, allows you to run one fast SQL query from Oracle Database 12c across all data stored in Hadoop, relational, and NoSQL databases. Oracle Big Data SQL is a new architecture that brings the full power of Oracle SQL to Hadoop, with local processing of SQL queries on the Hadoop nodes. The architecture offers easy data integration across Hadoop, Oracle Database, and Oracle NoSQL Database, a single SQL entry point to all data, and scalable joins between Hadoop data and the RDBMS.
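In practice, Hadoop data is surfaced through an external table using Big Data SQL's `ORACLE_HIVE` access driver, after which it joins with relational tables like any other table. A hedged sketch; the table, column, and Hive names are illustrative:

```sql
CREATE TABLE web_logs_ext (
  log_time TIMESTAMP,
  user_id  VARCHAR2(64),
  url      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE                        -- read via the Hive metastore
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename = logs.web_logs)
)
REJECT LIMIT UNLIMITED;

-- One query spanning Hadoop and relational data:
SELECT c.name, COUNT(*)
FROM   customers c JOIN web_logs_ext w ON w.user_id = c.id
GROUP  BY c.name;
```

The scan of `web_logs_ext` is pushed down to the Hadoop nodes, which is the local processing the text mentions.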
Oracle NoSQL Database is a scalable, high-performance, highly available transparent load-balancing database management system that stores all data in key-value pairs.
Improvements in Oracle Multitenant Features:
New Multitenant features in Oracle Database 12.1.0.2 relate primarily to cloning pluggable databases (PDBs). Some tablespaces can now be excluded from cloning. It is possible to clone only metadata, which is sometimes all that development requires. Remote cloning allows you to clone a PDB between two container databases over a database link. Finally, there is thin cloning, based on the Direct NFS support built into the database and independent of the file system.
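The cloning variants just listed can be sketched as follows; the PDB names and the database link are illustrative:

```sql
-- Subset clone: bring over only the listed user tablespaces.
CREATE PLUGGABLE DATABASE dev_pdb FROM prod_pdb
  USER_TABLESPACES = ('APP_DATA');

-- Metadata-only clone (table definitions without row data):
CREATE PLUGGABLE DATABASE dev_pdb2 FROM prod_pdb NO DATA;

-- Remote clone over a database link to the source container database:
CREATE PLUGGABLE DATABASE dev_pdb3 FROM prod_pdb@prod_cdb_link;
```

All three are single DDL statements run from the root container, which is what makes PDB provisioning for development so lightweight.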
Other enhancements include a new SQL expression, CONTAINERS(), that allows aggregated queries over tables located in multiple pluggable databases. The new STANDBYS clause lets you explicitly enable or suppress the creation of a pluggable database on standby databases when the PDB is created.
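A brief sketch of both features; the table and PDB names are illustrative:

```sql
-- From the root container: aggregate one table across all open PDBs.
SELECT con_id, COUNT(*)
FROM   CONTAINERS(sales)
GROUP  BY con_id;

-- Create a PDB that is excluded from all standby databases:
CREATE PLUGGABLE DATABASE reporting_pdb
  ADMIN USER pdb_admin IDENTIFIED BY secret
  STANDBYS = NONE;
```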
Advanced Index Compression compresses indexes to reduce the amount of disk space used (in some databases, indexes occupy half of the disk space) and to make more efficient use of the buffer cache.
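Enabling it is a one-clause change; the index and table names below are illustrative:

```sql
-- Create a new index with Advanced Index Compression:
CREATE INDEX orders_cust_ix ON orders (customer_id, order_date)
  COMPRESS ADVANCED LOW;

-- Or enable it on an existing index during a rebuild:
ALTER INDEX orders_cust_ix REBUILD COMPRESS ADVANCED LOW;
```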
Full database caching. Enabled automatically to take advantage of all available memory and potentially improve performance if the database fits in memory. It is possible to force full caching, including tables with NOCACHE LOB objects, by using the ALTER DATABASE FORCE FULL DATABASE CACHING command.
Automatic caching of large tables. Can be used when the entire database does not fit in memory but some large objects do. The DB_BIG_TABLE_CACHE_PERCENT_TARGET parameter allocates a separate area in the buffer cache for large tables. Whereas the regular buffer cache caches data at the block level, large tables are cached in, and evicted from, this area in their entirety, based on access frequency.
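The two caching modes described above are enabled as follows; the 40 percent figure is illustrative:

```sql
-- Force full database caching (the database must be mounted, not open):
ALTER DATABASE FORCE FULL DATABASE CACHING;

-- Or, if only some large objects fit, reserve part of the buffer cache
-- for whole-table caching of large tables:
ALTER SYSTEM SET DB_BIG_TABLE_CACHE_PERCENT_TARGET = 40;
```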
The Attribute Clustering directive for a table arranges data by column values, so that rows with the same column values lie together on disk. The directive takes effect during direct-path load operations, such as bulk inserts or moving a table. Attribute Clustering can be useful for compression, because ordered data compresses better under the Advanced Compression option. But Attribute Clustering yields the most benefit when used together with another new Oracle Database 12.1.0.2 feature, Zone Maps. Zone maps, available on Oracle Exadata and SuperCluster, record the minimum and maximum values of specified columns for ranges of rows and allow unneeded data to be filtered out quickly when a query runs. These technologies are completely transparent to applications: they improve query performance, reduce the number of physical reads, significantly reduce I/O for highly selective queries, and optimize disk usage.
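The combination reads as follows in DDL; the table, columns, and staging table are illustrative:

```sql
-- Cluster rows by region and date, and maintain a zone map over them:
CREATE TABLE sales_ac (
  sale_date DATE,
  region    VARCHAR2(30),
  amount    NUMBER
)
CLUSTERING BY LINEAR ORDER (region, sale_date)
WITH MATERIALIZED ZONEMAP;

-- Clustering takes effect on direct-path loads, e.g.:
INSERT /*+ APPEND */ INTO sales_ac SELECT * FROM sales_stage;
```

A query filtered on `region` can then skip every zone whose min/max range excludes the predicate, which is where the I/O savings come from.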