Oracle Database evolves with every release, and one of the most important releases is 19c: it is the long-term-support release of its family. Every DBA should know the new features of this release, and this post walks through the highlights.
Let’s start with the first feature: Data Pump test mode. In 19c you can run a transportable tablespace export in test mode. Data Pump goes through the steps of the export without requiring the tablespaces to be read-only, and it writes a dump file that can only be used for verification, never for an import.
That is not a limitation; it is the whole point of the feature. The temporary dump file lets you confirm that the real export would succeed, and roughly how long it would take, without impacting the production tablespaces.
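As a sketch of how this looks in practice (credentials, the directory object, dump file names, and the tablespace name are illustrative; `TTS_CLOSURE_CHECK=TEST_MODE` is the documented way to request test mode for a transportable tablespace export):

```shell
# Transportable tablespace export in 19c test mode.
# TTS_CLOSURE_CHECK=TEST_MODE: the tablespace need not be read-only,
# and the dump file produced cannot be used for an import -- it only
# tells you whether the real export would error and how long it takes.
expdp system/password \
  directory=DP_DIR \
  dumpfile=tbs1_test.dmp \
  logfile=tbs1_test.log \
  transport_tablespaces=TBS1 \
  tts_closure_check=TEST_MODE
```

The export log then reports timing and any problems, after which you can schedule the real (read-only) export with confidence.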
The beauty of this test mode is that you can see whether you would encounter an error in the actual export before you run it, which makes it a feature well worth having in 19c. The next feature concerns Data Pump parallelism: exports and imports are performed by worker processes, and 19c gives you more control over how many of them a job may use.
The number of worker processes, set with the PARALLEL parameter of expdp and impdp, is the single biggest factor in how fast an export or import runs. With more workers, Data Pump can process more segments concurrently, which increases the speed of the job.
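A minimal sketch of a parallel export (schema, directory object, and file names are illustrative):

```shell
# Export with multiple Data Pump worker processes.
# PARALLEL sets the maximum number of active workers; each worker
# handles its own portion of the data, so throughput scales roughly
# with the worker count (up to your I/O and CPU limits).
# %U in the dump file name lets each worker write to its own file.
expdp system/password \
  directory=DP_DIR \
  schemas=HR \
  dumpfile=hr_%U.dmp \
  logfile=hr_exp.log \
  parallel=4
```

As a rule of thumb, match the dump file count to the PARALLEL value so no worker ends up waiting for a file to write to.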
Before 19c the listener wrote to a single, ever-growing log file, which was hard to manage. 19c lets you define the maximum number of log files the listener may keep, and the maximum size of each one. For example, in one of our configurations we set the maximum number of log files to eight and the maximum size of each file to 2MB.
With those settings, when the first log file reaches 2MB the listener starts the second file, then the third, and so on. When the maximum of eight files is reached, the oldest log file is deleted as the ninth is created, and the rotation carries on like that.
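The rotation scheme above can be sketched as a listener.ora fragment (the listener name `LISTENER` is illustrative; `LOG_FILE_NUM_<name>` and `LOG_FILE_SIZE_<name>` are the 19c parameters, with the size understood to be in MB):

```text
# listener.ora -- 19c listener log rotation for a listener named LISTENER.
# LOG_FILE_NUM_<name>  : maximum number of log files to keep.
# LOG_FILE_SIZE_<name> : maximum size of each log file, in MB.
LOG_FILE_NUM_LISTENER  = 8
LOG_FILE_SIZE_LISTENER = 2
```

With these two lines, the listener keeps a rolling window of 8 x 2MB of log history and never grows beyond it.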
It means that at any given point in time you will have at most eight log files on the system for the listener. That’s the new logging setting; now let’s look at the next one. The next 19c feature is automatic indexing, and it does the index work for you: it creates indexes, rebuilds them when necessary, and drops the ones that are no longer useful. It’s called Auto Indexing, and I think you’ll find it quite useful.
Oracle’s auto-indexing feature determines what gets indexed based on the application workload. Depending on the needs of the application, Oracle decides whether to create an index, rebuild an existing index, or drop an index that is no longer needed. I think you’ll find it very useful when you are working with Oracle Database 19c.
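The feature is driven through the `DBMS_AUTO_INDEX` package. A short sketch (the schema name is illustrative; note that automatic indexing is only supported on certain platforms, such as Engineered Systems and Oracle Cloud):

```sql
-- Turn automatic indexing on. Modes:
--   IMPLEMENT   : create, rebuild, and drop indexes automatically
--   REPORT ONLY : only generate recommendations
--   OFF         : disable the feature
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Optionally restrict the feature to a given schema.
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'HR', TRUE);

-- Review what the feature has been doing (returns a CLOB report).
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;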
Next up is real-time statistics. Traditionally, optimizer statistics are refreshed by a scheduled gathering job, so if a lot of DML runs between two jobs, queries planned in the meantime are optimized against stale statistics. Sometimes the difference between the current and the previous statistics is big, sometimes not so much, but stale numbers can make your execution plans inaccurate.
That gap is exactly what 19c closes. Statistics gathering is a chore many DBAs dislike, yet you can’t get good performance out of a system without good statistics. With real-time statistics, Oracle maintains basic statistics as part of conventional DML, so the optimizer always has reasonably fresh numbers to plan with.
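On platforms where the feature is supported (Exadata and Oracle Cloud), it is controlled by an initialization parameter; the table names below are illustrative:

```sql
-- Enable real-time statistics (19c, supported platforms only).
-- Once enabled, conventional INSERT/UPDATE/DELETE statements maintain
-- basic column statistics so the optimizer is not left with stale data
-- between scheduled statistics-gathering jobs.
ALTER SYSTEM SET optimizer_real_time_statistics = TRUE;

-- A full, on-demand refresh for one table still works as before:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
```

Real-time statistics complement, rather than replace, the scheduled gathering job: the job still computes the full statistics, while the real-time mechanism keeps the basics current in between runs.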
Active Data Guard:
It should come as no surprise that I highly recommend Active Data Guard DML redirection. It’s a neat trick that lets you issue DML statements on an Active Data Guard standby and have the standby automatically redirect those DMLs to the primary database. This means a read-mostly reporting application pointed at the standby can perform the occasional write without being re-pointed at the primary.
To use Data Guard DML redirection, you configure the Active Data Guard standby to redirect DML commands to the primary. The redirected statement is executed on the primary, and the session on the standby waits until the change has been shipped back through redo apply, so the user sees their own change with full consistency.
Keep in mind that this is designed for occasional DML from read-mostly applications. It does not turn the standby into a second write node, and heavy DML through redirection will perform poorly, because every statement takes a round trip to the primary.
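A sketch of how redirection is switched on (the table and schema names are illustrative; `ADG_REDIRECT_DML` is the relevant session setting and initialization parameter):

```sql
-- On the Active Data Guard standby: enable redirection for this session.
ALTER SESSION ENABLE ADG_REDIRECT_DML;

-- Or enable it for all sessions via the initialization parameter.
ALTER SYSTEM SET adg_redirect_dml = TRUE SCOPE = BOTH;

-- DML issued on the standby is now executed on the primary; the
-- session waits until the change has been applied back to the standby.
UPDATE hr.employees SET salary = salary * 1.1 WHERE employee_id = 100;
COMMIT;
```

Other standby sessions see the updated row only after the COMMIT, which matches the commit semantics described below.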
This is a very important part of the semantics. Changes you make through DML redirection become permanent only when they are committed. You can issue DML on the active standby without worrying about half-done work becoming visible: other sessions will not see the change until you commit, and replication of the change does not take effect until after you have “committed” it.
Remember, uncommitted changes are rolled back by the database engine as usual. And as always, regular backups remain important, at least daily, and more often if you do a lot of testing against your data.
So the flow runs from the Active Data Guard session to the primary and back, and the change is made permanent only if the user commits; until then, the statement simply waits. Now let’s move on to the next feature, flashback standby. In previous releases, when you flashed the primary database back in time, the standby did not follow along automatically.
In 19c the standby follows the primary: when the primary is flashed back or restored to a restore point, the standby is automatically brought back to the matching point in time. You keep two fully functional copies of your data without manually rebuilding or flashing back the standby, which simplifies your life as a DBA.
Look at the next feature: propagation of restore points from the primary to the standby. Normally, if you create a restore point on your primary database, that information lives only in the primary’s control file. In earlier versions, after a switchover the restore point information was simply not available on the standby. In 19c, a restore point created on the primary is automatically propagated to the standby, so it survives a role change. Oracle Database AutoUpgrade is another great new feature around 19c. It lets you drive a complete database upgrade automatically, end to end, with almost no intervention from you. This is an important feature for anyone running mission-critical applications on Oracle databases, and we are sure it will be very useful for all of you.
How to reduce the manual effort of an upgrade is one of the questions clients ask most often. With AutoUpgrade you barely have to do anything: you describe your databases in a config file, fire the utility, and the upgrade happens automatically.
So guys, a database upgrade traditionally has three phases: the pre-upgrade checks, the actual upgrade, and the post-upgrade tasks. If you know these phases and how to perform each one, you know how to upgrade a database the manual way, which used to mean defining the process flow of your upgrade and scripting every stage yourself.
With AutoUpgrade, all three phases are taken care of completely end to end, and you need to fire only one single command. Imagine how simple your life becomes when one upgrade command handles the pre-upgrade part, the main upgrading of the database, and the post-upgrade tasks.
You know what? There is no requirement to hand-edit the initialization parameters either; the AutoUpgrade utility takes care of that as well. That’s the beauty of the tool: it can take you, for example, from 12c or from 18c straight to 19c. In other words, you don’t have to do this stuff manually, which means you can spend more time on interesting work and less time fiddling with configuration files.
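The one-command workflow described above can be sketched like this (paths and the config file name are illustrative; `autoupgrade.jar` ships with recent Oracle homes and is also downloadable from Oracle Support):

```shell
# AutoUpgrade drives pre-upgrade analysis, the upgrade itself, and the
# post-upgrade fixups from a single config file.

# 1. Generate a sample config file, then edit it for your environment
#    (source/target Oracle homes, SID, log directory, etc.):
java -jar autoupgrade.jar -create_sample_file config

# 2. Dry run: pre-upgrade checks only, nothing is changed:
java -jar autoupgrade.jar -config autoupgrade.cfg -mode analyze

# 3. The actual end-to-end upgrade, all three phases in one command:
java -jar autoupgrade.jar -config autoupgrade.cfg -mode deploy
```

Running `analyze` first is the safe pattern: it produces a report of anything that would block the real `deploy` run, without touching the database.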
When you want to build a strong Oracle DBA career, you should be aware of database services and the surrounding database technology. Without knowledge of Oracle internals, Oracle performance tuning, and Oracle database troubleshooting skills, you can’t become an expert Oracle DBA. This expert DBA Team club blog always brings you the latest technology news and database news to keep you up to date. You should also be aware of cloud database technology such as DBaaS. All these Oracle DBA tips are available in a single resource at our orageek. Meanwhile, we also provide some SQL tutorials for Oracle DBAs. This blog is part of the Dbametrix Group, and you will find more advanced topics at our partner resource.