Yann Neuhaus

dbi services technical blog

SQL Server availability groups, SQL Browser and Shared Memory considerations


A few weeks ago, my colleagues and I discussed availability groups and network considerations for one of our customers, including disabling the SQL Browser service and the shared memory protocol. The point was that disabling both features may lead to unexpected behavior when creating availability groups.

[Figure: AG network banner]

Let's start with the SQL Browser service. It is not uncommon to see this service disabled at customer sites, with the SQL Server listen ports used directly instead. But if you go through the availability group wizard, you will run into plenty of blockers for actions that require connecting to the secondary replica, such as adding a database, performing a failover and so on.

Disabling the SQL Browser service doesn't mean you cannot reach your SQL Server instance by using the named instance format SERVER\INSTANCE. There are some scenarios that work perfectly, including connecting from the local server through shared memory or using SQL Server aliases. Let's say my infrastructure includes 2 AG replicas, vmtest2012r04\SQL2014 and vmtest2012r05\SQL2014. The SQL Browser is disabled and shared memory is enabled on each. There are no aliases either. If you try to connect from vmtest2012r04\SQL2014 by using the named instance format, it will work on the local replica (through shared memory) but it won't work if you try to connect to the remote replica vmtest2012r05\SQL2014. In the latter case, you will have to use the SERVER,PORT format as shown below:

C:\Users\dab>sqlcmd -S vmtest2012r204\SQL2014 -Q"SELECT 'OK' AS connection"
connection
---------
OK

(1 rows affected)

C:\Users\dab>sqlcmd -S vmtest2012r205\SQL2014 -Q"SELECT 'OK' AS connection"
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : SQL Server Network Inte
rfaces: Error Locating Server/Instance Specified [xFFFFFFFF]. .
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login timeout expired.
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : A network-related or in
stance-specific error has occurred while establishing a connection to SQL Server
. Server is not found or not accessible. Check if instance name is correct and i
f SQL Server is configured to allow remote connections. For more information see
 SQL Server Books Online..

C:\Users\dab>sqlcmd -S vmtest2012r205,1453 -Q"SELECT 'OK' AS connection"
connection
---------
OK

 

But I guess this is not a big surprise for you. This kind of configuration works well with availability groups, but at the cost of some compromises. Indeed, creating an availability group remains pretty easy and you just have to keep using the SERVER,PORT format when the wizard asks for connection information.
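If you want to double-check which protocol a given session actually ended up using, sys.dm_exec_connections exposes it; run from the session you want to inspect, a local connection should report shared memory and a remote one TCP. A minimal check:

-- which protocol and port does my own session use?
SELECT net_transport, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;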

[Figure: AG wizard - specify replicas]

But the game is different when adding a database to the AG, or for any operation that requires connecting to the replicas. In this case the wizard forces the connection to the secondary replica to use the SERVER\INSTANCE format, leaving you stuck at this step.

[Figure: AG wizard - add database]

The only way around it is to go through a T-SQL script (or PowerShell command) so you can switch to the SERVER,PORT format. This is probably something that may be fixed by Microsoft in the future.
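For reference, the underlying T-SQL is simple enough; only the connection to the secondary has to be opened with the SERVER,PORT format. A rough sketch (the database name is hypothetical and the database is assumed to be already restored on the secondary with NORECOVERY):

-- on the primary replica
ALTER AVAILABILITY GROUP [AG2014] ADD DATABASE [MyAppDB];

-- on the secondary replica, connected as vmtest2012r205,1453
ALTER DATABASE [MyAppDB] SET HADR AVAILABILITY GROUP = [AG2014];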

Let's now add disabling the shared memory protocol on each replica to the equation. I have met some customers who disable it to meet their internal best practices, because their applications are not intended to connect locally on the same server as their database engine. At first glance this is not a bad idea, but we may get into trouble with operations performed on availability group architectures. This is at least what we experienced every time we were in this specific context. For instance, if I try to create an availability group, I face the following timeout error message:

[Figure: AG wizard - shared memory disabled]

This is a pretty weird issue and to get more details we have to take a look at the cluster log. Here is the interesting sample of messages we may find:

...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].  (268435455)
...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [HYT00] [Microsoft][SQL Server Native Client 11.0]Login timeout expired (0)
...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (268435455)
...2018/10/22-20:42:51.436 INFO  [RES] SQL Server Availability Group <AG2014>: [hadrag] Could not connect to SQL Server (rc -1)
...2018/10/22-20:42:51.436 INFO  [RES] SQL Server Availability Group <AG2014>: [hadrag] SQLDisconnect returns following information
...2018/10/22-20:42:51.451 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08003] [Microsoft][ODBC Driver Manager] Connection not open (0)
...2018/10/22-20:42:51.451 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] Failed to connect to SQL Server
...2018/10/22-20:42:51.451 ERR   [RHS] Online for resource AG2014 failed.

 

It seems that RHS.exe, through the resource associated with my AG, is not able to connect to the SQL Server replica during the initialization phase. According to the above cluster log, the ODBC connection seems to be limited to the SERVER\INSTANCE format and, as far as I know, there is no interface to change it with the AG cluster resource DLL (thanks to the Microsoft guys for confirming this point). Therefore, disabling both SQL Browser and shared memory means the AG cannot be brought online, because a communication channel cannot be established between the primary and the cluster service. My friend MVP Christophe Laporte also tried some funny tests by creating custom DSN connections, without luck. So the simplest way to fix it, if you want to keep the SQL Browser service disabled, is to enable shared memory on each replica. Another workaround may consist in using SQL aliases, but that leads to a static configuration that requires you to document your architecture well.
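For completeness, such an alias is only a registry entry on the client side, so it can be scripted, for example with PowerShell. This is just a sketch (64-bit client branch; 32-bit tools use the Wow6432Node path, and DBMSSOCN simply means TCP/IP):

# map the named instance to a fixed host/port for the 64-bit SQL client
$path = 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo'
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name 'vmtest2012r205\SQL2014' -Value 'DBMSSOCN,vmtest2012r205,1453' -PropertyType String -Force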

In a nutshell, disabling SQL Browser limits the AG operations that can be done through the GUI. Adding disabled shared memory to the equation may have a bigger impact on the underlying WSFC infrastructure that you have to be aware of. According to my tests, this behavior seems to be the same for versions from SQL Server 2012 to SQL Server 2017 (on Windows), regardless of the WSFC version.

Hope this helps!

 

 

 

 

 


Oracle OpenWorld 2018: Day 1


The first session I attended today was Oracle Active Data Guard: Best Practices and New Features Deep Dive.
This session was given by Nitin Karkhanis, Director of Software Development managing the Data Guard Broker development team, and Mahesh Girkar, Senior Director of Software Development in Oracle's Database Division. His team is responsible for developing High Availability features for Data Guard and Active Data Guard.
It was really a very interesting session. It was divided in two parts: the new features for 18c and the new features they will implement for Oracle 19c.

Some Active Data Guard New Features for Oracle 18c

>Multi-Instance Redo Apply now supports block change tracking
>Data Guard and Database Nologging mode
>The database buffer cache will be preserved on an Active Data Guard standby during a role change
>Creating private temporary tables is supported in Active Data Guard
>Better protection against failed logins

Some New Data Guard Broker Commands for Oracle 18c
dgmgrl > show all;
dgmgrl > set debug ON | OFF
dgmgrl > set echo ON | OFF
dgmgrl > set time ON | OFF
dgmgrl > validate  database boston spfile;
dgmgrl > validate  network configuration for boston;
Some New features for Data Guard in 19c

>Multi-Instance Redo Apply will work with the In-Memory Column Store
>Global Temporary Tables can now be created and dropped on an Active Data Guard standby
>Tunable Automatic Outage Resolution: the parameters that control the wait times used to detect a hung process will now be documented,
>DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. In former versions these parameters were hidden.
>If a flashback is done at the primary database, no action will be needed on the standby side in the future. We just have to mount the standby
and Oracle will do the rest

Some Data Guard Broker Feature in 19c

TRACE_LEVEL replaces DEBUG

dgmgrl > set TRACE_LEVEL USER|SUPPORT 

New Commands to Set Database Parameters

dgmgrl>EDIT DATABASE SET PARAMETER parameter_name=value
dgmgrl>EDIT DATABASE RESET PARAMETER parameter_name

New command to export and import broker configuration file

dgmgrl> export configuration to 'meta.xml'
dgmgrl> import configuration from 'meta.xml'

Properties now pass through to Initialization Parameters
>DataGuardSyncLatency (DATA_GUARD_SYNC_LATENCY)
>StandbyFileManagement (STANDBY_FILE_MANAGEMENT)

Another session I attended was Best Practices for Maintaining Oracle RAC/Single Instance Database Environments, presented by Bill Burton, Consulting Member of Technical Staff, Oracle;
Scott Jesse, Senior Director, Customer Support, DB Scalability, Security, Networking, SSC, Oracle;
and Bryan Vongray, Senior Principal Technical Support Engineer, Oracle.
In fact, according to their statistics, more than 50% of opened SRs concern well known issues. So in this session they presented some tools that can help with troubleshooting and monitoring RAC and single instance databases:
TFA
ORAchk and EXAchk
With many illustrations, the speakers explained how to use the different tools to diagnose our environment.

The last session I attended was Inside the Head of a Database Hacker by Mark Fallon, Chief Security Architect, Oracle

The speaker tried, in simple words, to help us understand the motivations of hackers and the way they act.
This helps to protect any environment. He came to the conclusion that security needs to be a collaboration across:
>Database Security
>Application Security
>Network Security
>End-point Security
>Process
>Employee Education
>Physical Security
>Supply Chain Security

So we will come back tomorrow for another briefing of our day


Oracle Open World 2018 D1: Microservices Get Rid of Your DBA and Send the DB into Burnout


This morning (23.10.2018) I had the pleasure to attend Franck Pachot's session about microservices. Between 70 and 100 people were present in the room to listen to the ACE Director, OakTable member and OCM speaking about microservices.

Franck Pachot - microservices

Franck introduced microservices based on the fact that customers could want to get rid of their databases.

Getting rid of your database because it's shared, because it contains persistent data and because you query it with SQL could be seen as good reasons. With smaller components you can share less and you can dedicate each component to an owner. With that in mind comes the idea of microservices. Of course such reasoning has many limits, such as the fact that microservices shouldn't have data in common.

Usually you query databases with SQL or PL/SQL. However SQL is a 4th generation language and SQL developers are rare. SQL is not only too complicated but also not portable. It's even worse with PL/SQL and T-SQL.

Solution: microservices with easier technology and offshored development. This is precisely what Franck spoke about in his session.

Indeed he did a demo with two tables (an accounts table and a customers table). He transferred a few dollars from one account to another, at first using SQL, then with PL/SQL, then JavaScript on the client and finally JavaScript in the database using MLE (Multi Language Engine), and checked the CPU time for each of these methods. The results are the following:

  • SQL – 5 seconds of CPU
  • PL/SQL – 30 seconds of CPU
  • JavaScript on client – 2 minutes of CPU (45 seconds on the client and 75 seconds in the database)
  • JavaScript in DB (MLE) – 1 minute of CPU

[Figure: the SQL statement used in the demo]

What is particularly interesting here is that you might think you will offload the database by executing this statement with JavaScript on the client. Such a wish could be motivated by decreasing the CPU power needed and therefore the Oracle licensing footprint. However, Franck proved exactly the opposite: you will at least double the CPU power required to execute the same operation. Running through different engines, processes and machines does not scale and burns more CPU cycles in each tier.

The difference between SQL and PL/SQL, which run in the same process, is due to context switches.
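Franck's demo code itself is not reproduced here, but the idea can be sketched with a hypothetical accounts table: the set-based statement crosses the PL/SQL/SQL boundary once, while the row-by-row loop crosses it on every iteration.

-- one set-based statement: a single context switch into the SQL engine
UPDATE accounts SET amount = amount + 1 WHERE account_id = 1;

-- row-by-row in PL/SQL: one context switch per UPDATE execution
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    UPDATE accounts SET amount = amount + 0.000001 WHERE account_id = 1;
  END LOOP;
END;
/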

The difference between SQL and JavaScript on the client is due to context switches on the server, context switches on the client, but also network latency.

Even if a context switch is really fast (have a look at the cost in CPU cycles), Franck (who is working at CERN) explained to us that during this time a proton can do a complete round of the CERN Large Hadron Collider (27 km).

Anyway, it was really interesting to see that it will be possible in the future to load JavaScript into an Oracle Database using MLE.


Oracle Open World 2018 D1: Larry Ellison keynote Cloud Gen2

Mon, 2018-10-22 21:15

Today (23.10.2018) was the first time I attended a keynote from Larry Ellison and I wasn't disappointed. Not because of technical information or true facts and figures, but simply because of his charisma and Oracle's ability to transform a keynote into a real show. Whatever he speaks about, and whether it's true or not, I have to admit that it was really entertaining and funny.

Oracle Cloud Generation 2 - intro

As expected during this keynote, Larry’s keywords were:

1. Cloud
2. Autonomous
3. Machine Learning
4. RoBots
5. Amazon

During the entire keynote, parallels were made between the cloud and an autonomous car: "Sit back, relax and let Oracle do the driving". You can find most of this comparison on auto.oracle.com.

Oracle Cloud Generation 2 - Car
Cloud Computing Generation 2 – new architecture

Secure Infrastructure requires new hardware and new software

  1. Impenetrable Barrier: Dedicated Network of Cloud Control Computers
    • Barrier: Dedicated Network of Cloud Control Computers. Cloud Control Computers protect cloud perimeter and customer Zones.
    • Impenetrable: No Customer access to cloud control computers and Memory
  2. Autonomous RoBots: AI/ML RoBots Find and kill Threats
    • Database immediately Patches Itself while running – Stop Data Theft
    • No delay for human process or downtime
    • No longer Our People versus Their robots – Our Robots vs their Robots

One of the key differences between Cloud Generation 1 and Cloud Generation 2 is the Cloud Control Computer architecture, as presented below.

Cloud Generation 2 - Architecture

First Generation Clouds Built on Decade Old Technology

Compared to the second generation, Cloud Generation 1 was:

  • Designed for building “angry birds”
  • Not meant to run our mission critical business applications
  • Security was an afterthought
  • Pay significantly more for higher performance
  • Cloud way or no way – Not meant to move your datacenter to the cloud

In comparison Cloud Generation 2 is one unified architecture:

  • Foundation for autonomous database
  • Extensible platform for Saas Applications
  • Runs Enterprise Applications and Cloud Native Applications
  • Gen 2 public Cloud Available Now – Gen 2 Cloud@Customer 2019
  • Easy Free Push Button Upgrade From Gen 1 to Gen 2 Database@Customer

Other Oracle Generation 2 design goals are:

  • Support better functionality and performance than on premises
  • Improve automation (Eliminate mundane tasks, managing upgrade, security patches)
  • Easily connect between your datacenter and ours securely
  • Guarantee consistent enterprise performance (Industry first SLA that covers availability, performance and manageability)
Oracle Performance vs Amazon Performance

Such a keynote wouldn't have been complete without a performance and price comparison between Oracle Cloud and Amazon AWS. As "expected", Oracle Gen 2 performs much better at a lower cost. Many slides and even "demos" were shown in order to prove how fast the Oracle database is compared to Amazon's databases.

  • Performance Benchmarks 3 to 100 Times faster than Amazon’s Databases
  • Oracle Autonomous Data Warehouse vs Amazon Redshift – 9x Faster and 8x cheaper
  • Oracle Autonomous Transaction Processing vs Amazon Aurora – 11x faster and 8x cheaper
  • Oracle Autonomous Transaction Processing vs Amazon aurora mixed workloads – 100x faster and 80x cheaper
  • Autonomous Transaction Processing vs Oracle on RDS – 3x faster and 2x cheaper
  • Autonomous Transaction Processing vs Oracle on RDS while patching – Infinitely Faster and Infinitely Cheaper

No comment…

But for those of you who want to compare, Oracle graciously offers a 2 TB Autonomous Database for 3'300 hours on https://cloud.oracle.com/try-autonomous-database

Larry Ellison also spoke about the low latency and high bandwidth RDMA cluster networking.

Oracle Cloud Generation 2 Security

A special focus was put on security in order to explain that Cloud Gen 2 is fully secure and will protect you against any threat:

  • Compliance: Service, tools AI/ML to monitor your cloud infrastructure
  • Edge Security: DDos, DNS, WAF
  • Access Security: Identify, resource access management
  • Autonomous Database: Autonomous database: self-patching, self-repairing
  • Data Security: At-rest and in-transit encryption, key management
  • Network Security: Cloud Control Computers: private encrypted backbone
  • Isolation: Full physical isolation from other tenants and Oracle
Oracle Cloud Generation 2 Availability

Availability of Oracle Cloud Generation 2:

  • New Infrastructure and Database Customers get Gen 2 NOW
  • Database Cloud@Customer upgraded to Gen2: Summer 2019 (Database Cloud@Customer free upgrade autonomous database Cloud@Customer)
  • Full OCI Cloud@Customer: Calendar 2019 (Complete Gen 2 Cloud on your premise, under your control)
Oracle Autonomous Database

Everything is Automated: Nothing to learn, nothing to do

  • Automatic Provisioning
  • Automatic Scaling
  • Automatic Tuning
  • Automatic Security
  • Automatic Fault Tolerant Failover
  • Automatic Backup and Recovery
  • And More

Core message: easiest to use, lowest cost to operate. All of that makes Oracle 25x more reliable than Amazon.

Larry also presented real ADW (Autonomous Data Warehouse) use cases (ADW in action on customer workloads) where he compared customer-tuned real warehouse workloads to ADW. Conclusion: ADW consistently exceeds hand-tuned performance and, in addition, ADW stays tuned as the workload changes. Of course the conclusions are the same for ATP (Autonomous Transaction Processing).

Cloud Generation 2 - Performance

Some sentences I caught:

About Oracle Cloud:

  • “Security built in from the center to the outside, from the perimeter to the inside”
  • “The most important component of the generation 2 cloud is the autonomous database and we did lots of progress since last year”.
  • “We have autonomous robots searching for threats. We search and destroy the threats”… “that’s our robots vs their robots”
  • “We eliminate human labor and human errors”

About Amazon:

  • “They should have a funny contract with their internet provider, price is depending on the way data move from or towards their cloud”
  • “Move out the AWS database cloud cost a lot.”…” Move data in the Oracle cloud that’s done.”
  • “AWS is a semi-autonomous database”… “semi-autonomous database, you drive it and you die.”… “Our database is fully autonomous”
  • “We guarantee that we cut the bill by half regarding amazon”
  • “Oracle database is 25x more available and reliable than the amazon database”
  • “Oracle is the best database that you can run on Amazon”

Cloud Generation 2 - Larry Ellison


Oracle Open World 2018 D 0: Bike trip starting on Golden Gate

Sun, 2018-10-21 23:36

Today (22.10.2018) my colleague Mouhamadou and I had the opportunity to take part in a bike trip (#BikeB4OOW) organized by Bryn Llewellyn, product manager for Oracle PL/SQL and Edition Based Redefinition (EBR). We were well accompanied by several other famous people such as Franck Pachot, Mike Dietrich, Pieter Van Puymbroeck, Liesbeth Van Raemd and Ivica Arsov, just to name a few of them.

Oracle Biking Team

We started our trip beside the Golden Gate at the Welcome Center at 10:00 am, heading towards Sausalito, and we kept going along the coast until Tiburon, which we reached at about 12:00. There we split the group between those who would enjoy a meal and take the ferry and another group which preferred to ride back by bike.

Golden Gate Bike trip

Mouhamadou and I, accompanied by Franck, Pieter, Liesbeth and Ivica, went for the first option and enjoyed a delicious meal at the Servino Ristaurante.

Lunch Time in Tiburon

We then went for a small post-lunch walk on Paradise Drive, taking some pictures of sea lions and herons, but also some selfies.

Heron in San Francisco

Finally we took the ferry to reach North Beach and bring back our bikes. It was the opportunity to get a wonderful view of Alcatraz Island, San Francisco and the Golden Gate Bridge.

Alcatraz and San Francisco

Because a blog post about a bike trip starting at the Golden Gate without even a Golden Gate picture is unthinkable, I made a small detour before giving back my bike to catch one…

Golden Gate

On the way back to the hotel we picked up our Oracle pass at Moscone Center in order to attend the sessions tomorrow.

The Oracle Pass

So tomorrow there is no sightseeing and no bike trip on the program, but I do hope for a lot of interesting technical sessions and fun. For sure I will attend the Larry Ellison keynote "Cloud Generation 2" and I'm pretty sure that I'm going to hear about autonomous tasks, AI, security and cloud ;-).

Greetings from San Francisco


Oracle OpenWorld 2018: Day 0

Sun, 2018-10-21 22:58

Today was day 0 of Oracle OpenWorld 2018. The event has not officially started yet, but there were many nice activities to discover and enjoy San Francisco.
The one Gregory Steulet (CFO of dbi services) and I participated in was the bike ride organized by Bryn Llewellyn, who is the PL/SQL Product Manager. There were also Mike Dietrich (Master Product Manager), Franck Pachot (Oracle Master 12c and ACE Director) and many other people.

After renting bikes, we started the tour at the Golden Gate Bridge. It was wonderful. The weather was not so bad.
Captureday01
After the Golden Gate Bridge, we continued around San Francisco Bay and made many stops to recharge the batteries.

Captureday02

After a long ride and a good lunch, we took the boat back.

Tomorrow is the big day, the official start of OpenWorld 2018, with multiple sessions.

We will of course come back to summarize the sessions we attend.


Oracle Open World 2018 J-1

Sun, 2018-10-21 01:10

I woke up this morning (20.10.2018) knowing that this day would be a very long one. Indeed, today my colleague Mouhamadou and I traveled to San Francisco in order to attend Oracle Open World (OOW). I travelled from Delémont to Zurich, which took about 1h30, and then caught my flight from Zurich at 13:10 (GMT+2) to land in San Francisco about 12 hours later at 16h10 (GMT-7).

Of course when you arrive on American soil and you are not a U.S. citizen, you have to face the queue before customs… about 1h45 of queueing in my case. Then we caught the shuttle to the Handlery Union Square Hotel and finally arrived at 18h30 (GMT-7).

Queue before the customs

After dropping our bags at the hotel, we decided to go for a bite at Mikkeller Bar on 34 Mason St., where many Oracle people and consultants were sitting over a beer.

Mikkeller Bar

During the next 4 days, Mouhamadou and I will share information regarding our Oracle Open World experience, product keynotes, sessions and so on, so do not hesitate to follow us either on Twitter or on the dbi services blog platform! You can also find some San Francisco pictures on my Instagram account.

For me it's now time to sleep… see you tomorrow for a bike tour with Bryn Llewellyn, Mouhamadou Diaw, Franck Pachot and many others on the Golden Gate before the Oracle Open World registration.

Good night !


Schema only account with Oracle 18.3

Sat, 2018-10-20 09:00

With Oracle 18.3, we have the possibility to create schemas without a password. Indeed, in a perfect world we should not be able to connect to application schemas: for security reasons it is a good thing that nobody can connect directly to the application schema.

A good way is to use proxy connections: in fact, connect as app_user but using the psi_user password, for example.

Let’s create a user named app_user:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.

SQL> create user app_user identified by app_user
  2  quota unlimited on users;

User created.

SQL> grant create session , create table to app_user;

Grant succeeded.

Let’s create a proxy user named psi_user:

SQL> create user psi_user identified by psi_user;

User created.

SQL> grant create session to psi_user;

Grant succeeded.
We allow the proxy connection to the app_user:

SQL> alter user app_user grant connect through psi_user;

User altered.

Now we can connect via the proxy user using the following syntax:

SQL> connect psi_user[app_user]/psi_user@pdb 
Connected.

We can see we are connected as user app_user but using the psi_user password:

SQL> select sys_context('USERENV','SESSION_USER') as session_user,
sys_context('USERENV','SESSION_SCHEMA') as session_schema,
sys_context('USERENV','PROXY_USER') as proxy,
user
from dual;

SESSION_USER	SESSION_SCHEMA	        PROXY		USER
APP_USER	APP_USER		PSI_USER	APP_USER

But there is a problem, if the app_user is locked the proxy connection does not work anymore:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> alter user app_user account lock;

User altered.

SQL> connect psi_user[app_user]/psi_user@pdb
ERROR:
ORA-28000: The account is locked.

Warning: You are no longer connected to ORACLE.

The good solution is to use the new Oracle 18c schema only account feature:

We drop the old accounts:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> drop user psi_user cascade;

User dropped.

SQL> drop user app_user cascade;

User dropped.

And we recreate them in the following way, we first create the schema owner with no authentication:

SQL> create user app_user no authentication
  2  quota unlimited on users;

User created.

SQL> grant create session , create table to app_user;

Grant succeeded.
We create the proxy user as before:
SQL> create user psi_user identified by psi_user;

We allow the proxy user to connect to the app_user:

SQL> alter user app_user grant connect through psi_user;

User altered.

We now can connect via psi_user:

SQL> connect psi_user[app_user]/psi_user@pdb
Connected.

And since app_user has been created with no authentication, you receive the classic ORA-01017 error when you try to connect directly with the app_user account:

SQL> connect app_user/app_user@pdb
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Using no authentication is a good protection, but you cannot grant administrative privileges to such users:

SQL> grant sysdba to app_user;
grant sysdba to app_user
*
ERROR at line 1:
ORA-40366: Administrative privilege cannot be granted to this user.

We can try to alter app_user with a password and grant it SYSDBA, but then we cannot set it back to no authentication until the privilege is revoked:

SQL> alter user app_user identified by password;

User altered.

SQL> grant sysdba to app_user;

Grant succeeded.

SQL> alter user app_user no authentication;
alter user app_user no authentication
*
ERROR at line 1:
ORA-40367: An Administrative user cannot be altered to have no authentication
type.

SQL> revoke sysdba from app_user;

Revoke succeeded.

SQL> alter user app_user no authentication;

User altered.

To understand correctly the behavior, I made the following test:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
I remove the no authentication:

SQL> alter user app_user identified by app_user;

User altered.

Now I can connect to the app_user schema, create a table and insert some values:

SQL> connect app_user/app_user@pdb
Connected.
SQL> create table employe (name varchar2(10));

Table created.

SQL> insert into employe values('Larry');

1 row created.

SQL> commit;

Commit complete.

I reset the app_user to no authentication:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> alter user app_user no authentication;

User altered.

I connect with the proxy user and can display the content of the employe table:

SQL> connect psi_user[app_user]/psi_user@pdb
Connected.
SQL> select * from employe;

NAME
----------
Larry

The table belongs to the app_user schema:

SQL> select object_name, object_type, owner from all_objects where object_name ='EMPLOYE';

OBJECT_NAME	OBJECT_TYPE	OWNER
EMPLOYE		TABLE		APP_USER
SQL> insert into employe values ('Bill');

1 row created.

SQL> commit; 

Commit complete.

SQL> select * from employe;

NAME
----------
Larry
Bill

What is the behavior in the audit trail ?

We create an audit policy to detect any table creation:

SQL> create audit policy psi_user_audit_policy
  2  privileges create table
  3  when 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''APP_USER'''
  4  evaluate per session
  5 container=current

Audit policy created.

SQL> audit policy psi_user_audit_policy whenever successful;

Audit succeeded.
If we now have a look at the unified_audit_trail view:

SQL> select event_timestamp, dbusername, dbproxy_username from unified_audit_trail where object_name = 'SALARY' and action_name = 'CREATE TABLE'

EVENT_TIMESTAMP		DBUSERNAME	DBPROXY_USERNAME
16-OCT-18 03.40.49	APP_USER	PSI_USER

We can identify clearly the proxy user in the audit trail.

Conclusion:

Schema only accounts are an interesting new feature. In summary, we can create a schema named app_user with authentication set to NONE; the consequence is that nobody can log in to it directly. We can then create a proxy account named psi_user through which we connect to app_user, and create tables, views… in this app_user schema.
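As a side note, schema only accounts are reported with AUTHENTICATION_TYPE = 'NONE' in DBA_USERS, which gives a quick way to list them (just a convenience query, nothing more):

SQL> select username, authentication_type, account_status
  2  from dba_users
  3  where authentication_type = 'NONE';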









Monitoring Linux With Nmon

Thu, 2018-10-18 09:37

I was looking for tools to monitor Linux servers and I found an interesting one: nmon (short for Nigel's Monitor). I did some tests. In this blog I describe how to install nmon and how we can use it.
I am using an Oracle Enterprise Linux system.

[root@condrong nmon]# cat /etc/issue
Oracle Linux Server release 6.8
Kernel \r on an \m

[root@condrong nmon]#

For the installation I used the EPEL repository:

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm 
yum search nmon
yum install nmon.x86_64

Once installed, the tool is launched by just running the nmon command

[root@condrong nmon]# nmon

[Screenshot: nmon interactive screen]

If we type c we see CPU statistics.
If we type m we see memory statistics.
If we type t we can see the top processes, and so on.

[Screenshots: nmon CPU, memory and top processes views]

nmon can also be scheduled. The data is collected in a file which can be analyzed later. For this we can use the following options:

OPTIONS
       nmon follows the usual GNU command line syntax, with long options starting
       with two dashes ('-').  nmon [-h] [-s <seconds>] [-c <count>] [-f] [-d <disks>]
       [-t] [-r <runname>] [-x]  A summary of options is included below.

       -h            FULL help information

              Interactive-Mode: read the startup banner and type "h" once it is
              running. For Data-Collect-Mode see (-f)

       -f            spreadsheet output format [note: default -s300 -c288]
              optional

       -s <seconds>  time between refreshing the screen [default 2]

       -c <count>    number of refreshes [default millions]

       -d <disks>    to increase the number of disks [default 256]

       -t            spreadsheet includes top processes

       -x            capacity planning (15 min for 1 day = -fdt -s 900 -c 96)

In my example I just create a file named my_nmon.sh and execute the script:

[root@condrong nmon]# cat my_nmon.sh 
#! /bin/bash
nmon -f -s 60 -c 30

[root@condrong nmon]# chmod +x my_nmon.sh 
[root@condrong nmon]# ./my_nmon.sh

Once executed, the script creates a file in the current directory with the extension .nmon:

[root@condrong nmon]# ls -l *.nmon
-rw-r--r--. 1 root root 55444 Oct 18 09:51 condrong_181018_0926.nmon
[root@condrong nmon]#
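If you want such a capture every day without starting it manually, a simple cron entry does the job. This is only a sketch: the -m option tells nmon in which directory to write the output file, and the binary path may differ on your system.

# one snapshot every 5 minutes, 288 snapshots = 24 hours, starting at midnight
0 0 * * * /usr/bin/nmon -f -t -s 300 -c 288 -m /var/log/nmon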

To analyze this file, we have many options. In my case I downloaded the nmon_analyzer.
This tool works with Excel 2003 onwards and supports 32-bit and 64-bit Windows.
After copying my nmon output file to my Windows workstation, I just have to open the Excel file and then use the button Analyze nmon data.
[Screenshot: the nmon_analyzer workbook with the Analyze nmon data button]

And below I show some graphs made by the nmon_analyzer:

[Screenshots: nmon_analyzer graphs]

Conclusion
As we can see, nmon is a very useful tool which can help with monitoring our servers. It also works on AIX systems.


Inheriting super user privileges over a role automatically in PostgreSQL

Tue, 2018-10-16 09:18

In a recent project at a customer, where we synchronize the users and groups out of Active Directory, we hit a little issue I was not aware of before. Suppose you have created a role in PostgreSQL, made that role a superuser and then granted that role to another role. What happens when you log in using the other role? Will you have the superuser privileges by default? Sounds confusing, I know, so let's do a test.

To start with we create a simple role and make that role a super user:

postgres=# create role my_admin;
CREATE ROLE
postgres=# alter role my_admin superuser;
ALTER ROLE

Of course you could also do that in one step:

postgres=# create role my_admin superuser;
CREATE ROLE

As a second step lets create a new user that is a member of the admin group and inherits the permissions of that role automatically:

postgres=# create user my_dba login password 'admin' in role my_admin inherit;
CREATE ROLE
postgres=# \du
                                    List of roles
 Role name |                         Attributes                         | Member of  
-----------+------------------------------------------------------------+------------
 my_admin  | Superuser, Cannot login                                    | {}
 my_dba    |                                                            | {my_admin}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

The question now is: when we log in using the my_dba user, are we a superuser automatically?

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -U my_dba postgres
psql (12devel)
Type "help" for help.

postgres=> \du
                                    List of roles
 Role name |                         Attributes                         | Member of  
-----------+------------------------------------------------------------+------------
 my_admin  | Superuser, Cannot login                                    | {}
 my_dba    |                                                            | {my_admin}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=> create database db1;
ERROR:  permission denied to create database
postgres=> 

… and we are not. What we can do is:

postgres=> set role my_admin;
SET
postgres=# create database db1;
CREATE DATABASE

The reason for that is that some role attributes are never inherited automatically; these are LOGIN, SUPERUSER, CREATEDB, and CREATEROLE.
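A quick way to see this from within the session is the read-only is_superuser parameter, which only flips once the role is explicitly set (this is also what makes the psql prompt change from "=>" to "=#"):

postgres=> show is_superuser;
 is_superuser 
--------------
 off
(1 row)

postgres=> set role my_admin;
SET
postgres=# show is_superuser;
 is_superuser 
--------------
 on
(1 row)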

What you can do is put something like that into “.psqlrc”:

set role my_admin

… or do it like that:

postgres=# alter user my_dba set role my_admin;
ALTER ROLE

This will explicitly set the role with each login and the superuser privileges will be there. When you have a slightly more complicated scenario, where roles are assigned based on patterns in the username, you could do something like this and add it to ".psqlrc" as well (or put that into a file and then execute that file from ".psqlrc"):

DO $$
DECLARE
  lv_username pg_roles.rolname%TYPE := current_user;
BEGIN
  if ( substr(lv_username,1,2) = 'xx'
       and
       position ('yy' in lv_username) > 0
     )
  then
    execute 'set role my_admin';
  end if;
  perform 1;
END $$;

… or whatever checks you need to identify the correct user names. Hope that helps …

 


Where do Oracle CMP$ tables come from and how can we delete them?

Tue, 2018-10-16 05:27

According to the MOS Note "Is Table SCHEMA.CMP4$222224 Or Similar Related To Compression Advisor? (Doc ID 1606356.1)",
we know that since Oracle 11.2.0.4 BP1 or higher, when the Compression Advisor fails, some tables with names
that include "CMP" (e.g. CMP4$23590), which are created temporarily by the Compression Advisor process for the duration of its run, are not removed from the database as they should be.
How are these tables created? How can we "cleanly" remove them?

1. Check that no CMP tables exist.

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         0

2. Check that no compression is enabled for the table we will use to test the Compression Advisor.

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

3. Execute the Compression Advisor procedure

The procedure DBMS_COMPRESSION.get_compression_ratio analyzes the compression ratio of a table and gives information about its compressibility.
For information, Oracle Database 12c includes a number of enhancements to the DBMS_COMPRESSION package, such as In-Memory Compression or Advanced Compression.

Let's execute the DBMS_COMPRESSION.get_compression_ratio procedure:

SQL> 
alter session set tracefile_identifier = 'CompTest1110201815h51';
alter session set events '10046 trace name context forever, level 12';
set serveroutput on

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio (
    scratchtbsname  => 'USERS',
    ownname         => 'TEST_LAF',
    objname         => 'FOO',
    subobjname      => NULL,
    comptype        => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp      => l_blkcnt_cmp,
    blkcnt_uncmp    => l_blkcnt_uncmp,
    row_cmp         => l_row_cmp,
    row_uncmp       => l_row_uncmp,
    cmp_ratio       => l_cmp_ratio,
    comptype_str    => l_comptype_str,
    subset_numrows  => DBMS_COMPRESSION.comp_ratio_allrows,
    objtype         => DBMS_COMPRESSION.objtype_table
  );

  DBMS_OUTPUT.put_line('Number of blocks used (compressed)       : ' ||  l_blkcnt_cmp);
  DBMS_OUTPUT.put_line('Number of blocks used (uncompressed)     : ' ||  l_blkcnt_uncmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (compressed)   : ' ||  l_row_cmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (uncompressed) : ' ||  l_row_uncmp);
  DBMS_OUTPUT.put_line('Compression ratio                        : ' ||  l_cmp_ratio);
  DBMS_OUTPUT.put_line('Compression type                         : ' ||  l_comptype_str);
END;
/

Number of blocks used (compressed)       : 1325
Number of blocks used (uncompressed)     : 1753
Number of rows in a block (compressed)   : 74
Number of rows in a block (uncompressed) : 55
Compression ratio                        : 1.3
Compression type                         : "Compress Advanced"

PL/SQL procedure successfully completed.

4. Which internal "CMP" tables are created by DBMS_COMPRESSION.get_compression_ratio?

To handle the Compression Advisor process, Oracle creates 4 CMP* tables: CMP1$23590, CMP2$23590, CMP3$23590, CMP4$23590.

Strangely, the Oracle 10046 trace file contains the DDL for the creation of only the last 2 of them (we can also use LogMiner to find the DDL): CMP3$23590 and CMP4$23590.
The table CMP3$23590 is a copy of the source table.
The table CMP4$23590 is a compressed copy of the CMP3$23590 table.

grep  "CMP*" DBI_ora_20529_CompTest1110201823h19.trc

drop table "TEST_LAF".CMP1$23590 purge
drop table "TEST_LAF".CMP2$23590 purge
drop table "TEST_LAF".CMP3$23590 purge
drop table "TEST_LAF".CMP4$23590 purge
create table "TEST_LAF".CMP3$23590 tablespace "USERS" nologging  as select /*+ DYNAMIC_SAMPLING(0) FULL("TEST_LAF"."FOO") */ *  from "TEST_LAF"."FOO"  sample block( 99) mytab
create table "TEST_LAF".CMP4$23590 organization heap  tablespace "USERS"  compress for all operations nologging as select /*+ DYNAMIC_SAMPLING(0) */ * from "TEST_LAF".CMP3$23590 mytab
drop table "TEST_LAF".CMP1$23590 purge
drop table "TEST_LAF".CMP2$23590 purge
drop table "TEST_LAF".CMP3$23590 purge
drop table "TEST_LAF".CMP4$23590 purge

As we can see above, the "internal" tables (even the compressed one, CMP4$23590) are removed at the end of the process.

To be sure, we check in the database :

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         0

So everything is fine: no CMP tables exist and the source table is not compressed:

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

5. But what happens if DBMS_COMPRESSION.get_compression_ratio fails?

Let's force the failure of the DBMS_COMPRESSION.get_compression_ratio procedure…

SQL> 
alter session set tracefile_identifier = 'CompTest1410201822h03';
alter session set events '10046 trace name context forever, level 12';
set serveroutput on

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio (
    scratchtbsname  => 'USERS',
    ownname         => 'TEST_LAF',
    objname         => 'FOO',
    subobjname      => NULL,
    comptype        => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp      => l_blkcnt_cmp,
    blkcnt_uncmp    => l_blkcnt_uncmp,
    row_cmp         => l_row_cmp,
    row_uncmp       => l_row_uncmp,
    cmp_ratio       => l_cmp_ratio,
    comptype_str    => l_comptype_str,
    subset_numrows  => DBMS_COMPRESSION.comp_ratio_allrows,
    objtype         => DBMS_COMPRESSION.objtype_table
  );
 24
  DBMS_OUTPUT.put_line('Number of blocks used (compressed)       : ' ||  l_blkcnt_cmp);
  DBMS_OUTPUT.put_line('Number of blocks used (uncompressed)     : ' ||  l_blkcnt_uncmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (compressed)   : ' ||  l_row_cmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (uncompressed) : ' ||  l_row_uncmp);
  DBMS_OUTPUT.put_line('Compression ratio                        : ' ||  l_cmp_ratio);
  DBMS_OUTPUT.put_line('Compression type                         : ' ||  l_comptype_str);
END;
 32  /
DECLARE
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

Which "CMP*" tables persist afterwards?

Two "CMP*" tables are still present:

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         2

SQL> select owner,table_name from dba_tables where table_name like 'CMP%';

OWNER     TABLE_NAME
------- ----------
TEST_LAF  CMP3$23687
TEST_LAF  CMP4$23687


Since the CMP3* and CMP4* tables are copies (compressed, for the second one) of the source table, disk space usage can increase dramatically if the Compression Advisor fails frequently, especially with huge tables, so it is important to remove these tables.

The source table FOO and the internal tables CMP3$23687 and CMP4$23687 contain the same set of data (slightly less for the last two since the sample block option is used)…

SQL> select count(*) from test_laf.CMP3$23687;

  COUNT(*)
----------
     22147

SQL> select count(*) from test_laf.CMP4$23687;

  COUNT(*)
----------
     22147

SQL> select count(*) from test_laf.foo;

  COUNT(*)
----------
     22387

The worst part is that we now have a compressed table in the database while we don't have the compression license option:

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'CMP4$23687';

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

To remove the Oracle internal "CMP*" tables, let's analyze the 10046 trace file to check how Oracle removes these tables when the DBMS_COMPRESSION.get_compression_ratio procedure runs successfully.

Find below all the steps that Oracle performs to drop these tables:

drop table "TEST_LAF".CMP1$23687 purge

BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;

drop table "TEST_LAF".CMP2$23687 purge

PARSING IN CURSOR #140606951937256 len=515 dep=2 uid=0 oct=47 lid=0 tim=3421988631 hv=2219505151 ad='69fd11c8' sqlid='ct6c4h224pxgz'
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;


drop table "TEST_LAF".CMP3$23687 purge

BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;


drop table "TEST_LAF".CMP4$23687 purge
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;

To remove the "CMP*" tables, Oracle does the following:
– drop table *** purge
– call the internal procedure xdb.XDB_PITRIG_PKG.pitrig_truncate or xdb.XDB_PITRIG_PKG.pitrig_dropmetadata, depending on whether Oracle Virtual Private Database is used.

6. Last test: check that the source table is not compressed; we don't want compression enabled since we are not licensing the option…

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

7. Conclusion

To drop the "CMP*" tables left behind by the DBMS_COMPRESSION.get_compression_ratio procedure, just execute: drop table CMP* purge.
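If several advisor runs failed over time, a small query can generate the cleanup statements for everything that is left over. This is only a sketch: the pattern is a pure name match, so review the generated list before executing it.

SQL> select 'drop table "'||owner||'"."'||table_name||'" purge;' as ddl
  2  from dba_tables
  3  where regexp_like(table_name, '^CMP[1-4]\$[0-9]+$');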

I have not tested in more detail the case where compression is used with Oracle VPD, so I don't know the impact of executing the system procedures xdb.XDB_PITRIG_PKG.pitrig_truncate or xdb.XDB_PITRIG_PKG.pitrig_dropmetadata when VPD is in use.

 


Getting started with Red Hat Satellite – Initial configuration

Mon, 2018-10-15 04:13

The last post was all about the installation of Red Hat Satellite and the components that are used under the hood. This post is about doing the initial configuration so that packages can be synchronized from the Red Hat content network and subscriptions are available to be consumed by clients. This requires some initial tasks to be done in the Satellite itself, but also in the Red Hat portal where you can manage your subscriptions.

When you log on to the Satellite Console for the first time you’ll land on the monitoring page:
Selection_066
As we currently do not have any systems managed by this Satellite, the overview is quite boring at the moment. The first task for going further is to create an "Organization". Organizations are logical groups you should use to divide your infrastructure into whatever makes sense in your case: it could be a division, it could be based on content, whatever. This can be done using the Console, but it can also be done with the command line utility hammer. For the moment there is just the "Default Organization":
Selection_068

[root@satellite lib]$ hammer organization list
---|----------------------|----------------------|-------------|----------------------|------------
ID | TITLE                | NAME                 | DESCRIPTION | LABEL                | DESCRIPTION
---|----------------------|----------------------|-------------|----------------------|------------
1  | Default Organization | Default Organization |             | Default_Organization |            
---|----------------------|----------------------|-------------|----------------------|------------

Lets create a new one we will use throughout this series of posts:

[root@satellite lib] hammer organization create --name "dbi services" --label "dbi-services" --description "dbi services headquarter"
Organization created
[root@satellite lib] hammer organization list
---|----------------------|----------------------|--------------------------|----------------------|-------------------------
ID | TITLE                | NAME                 | DESCRIPTION              | LABEL                | DESCRIPTION             
---|----------------------|----------------------|--------------------------|----------------------|-------------------------
3  | dbi services         | dbi services         | dbi services headquarter | dbi-services         | dbi services headquarter
1  | Default Organization | Default Organization |                          | Default_Organization |                         
---|----------------------|----------------------|--------------------------|----------------------|-------------------------

Once we have that we can switch to our new organization in the Console:
Selection_070

To be able to browse the content of our organization we need to create a so-called "Debug Certificate". This can be done by switching to the "Administer->Organizations" screen:
Selection_072

Once you selected the organization the certificate can be generated:
Selection_073

To import that certificate into Firefox it needs to be converted. Using the generated certificate file, create two files: one containing the private key section and another one containing the certificate section, like here:

dwe@dwe:~/Downloads$ cat key.pem 
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAroq3rZuJ.....Q5eCqquzW4/Ie7SI3MQZQ==
-----END RSA PRIVATE KEY-----

dwe@dwe:~/Downloads$ cat cert.pem 
-----BEGIN CERTIFICATE-----
MIIG8jCCBdqgAwIB....Ix55eToRfqUZLzcAlrOFTaF8UrbDOoFTJldF
wxDDBpzk
-----END CERTIFICATE-----

These files can now be used to create a certificate that can be imported into Firefox:

dwe@dwe:~/Downloads$ openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out dbi-services.pfx -name dbiservices
Enter Export Password:
Verifying - Enter Export Password:
dwe@dwe:~/Downloads$ ls -la *pfx*
-rw------- 1 dwe dwe 3460 Oct 12 10:36 dbi-services.pfx

Selection_074

Once we have that in Firefox we are able to browse the organization’s repository. The URL is of the following format: https://[SATELLITE-MACHINE]/pulp/repos/[ORGANIZATION-LABEL], so in my case: https://192.168.22.11/pulp/repos/dbi-services

Selection_075
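If you prefer the command line over the browser, the same key and certificate files should also work with curl (a sketch using the file names from above and my Satellite's address; -k only skips the verification of the Satellite's own server certificate):

dwe@dwe:~/Downloads$ curl -k --cert cert.pem --key key.pem https://192.168.22.11/pulp/repos/dbi-services/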

To be able to attach systems to the Satellite we need to create a so-called "Subscription Allocation" in the Red Hat portal:
Selection_075
Selection_076
Selection_077

Once we have the “Subscription Allocation” we need to add subscriptions to it (I will use just two for the scope of this post):
Selection_078
Selection_079
Selection_080
Selection_081

These definitions need to be exported as a “Manifest” that then can be imported into our Satellite:
Selection_082

Importing the Manifest into the Satellite is done in the “Content > Red Hat Subscriptions” section:
Selection_084
Selection_086
Selection_087

From now on we are ready to synchronize content from the Red Hat content network. Again this can either be done via the Console or on the command line. To list all the products available using the hammer command line utility:

[root@satellite ~]$ hammer product list --organization "dbi services"
----|----------------------------------------------------------------------------------|-------------|--------------|--------------|-----------
ID  | NAME                                                                             | DESCRIPTION | ORGANIZATION | REPOSITORIES | SYNC STATE
----|----------------------------------------------------------------------------------|-------------|--------------|--------------|-----------
12  | dotNET on RHEL Beta for RHEL Server                                              |             | dbi services | 0            |           
99  | dotNET on RHEL for RHEL Server                                                   |             | dbi services | 0            |           
54  | MRG Realtime                                                                     |             | dbi services | 0            |           
1   | Oracle Java for RHEL Client                                                      |             | dbi services | 0            |           
94  | Oracle Java for RHEL Compute Node                                                |             | dbi services | 0            |           
18  | Oracle Java for RHEL Compute Node - Extended Update Support                      |             | dbi services | 0            |           
38  | Oracle Java for RHEL Server                                                      |             | dbi services | 0            |           
...

Each of those "products" in turn contains one or more repositories. To list the repository sets for Red Hat Enterprise Linux using hammer:

[root@satellite ~]$ hammer repository-set list --product "Red Hat Enterprise Linux Server" --organization "dbi services"
-----|-----------|---------------------------------------------------------------------------------
ID   | TYPE      | NAME                                                                            
-----|-----------|---------------------------------------------------------------------------------
2008 | yum       | Red Hat Enterprise Linux 4 AS Beta (RPMs)                                       
7750 | yum       | Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (Debug RPMs)                    
2009 | yum       | Red Hat Enterprise Linux 4 AS Beta (Source RPMs)                                
2006 | yum       | Red Hat Enterprise Linux 4 AS Beta (Debug RPMs)                                 
2007 | file      | Red Hat Enterprise Linux 4 AS Beta (ISOs)                                       
7752 | yum       | Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (Source RPMs)                   
7751 | yum       | Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (RPMs)                          
...

For the scope of this blog post we are only interested in the Satellite Tools and the latest Enterprise Linux repositories, so we enable only those two. To enable the first one using the command line:

[root@satellite ~]$ hammer repository-set enable --name "Red Hat Enterprise Linux 7 Server (RPMs)" \
                                                 --releasever "7Server" \
                                                 --basearch "x86_64" \
                                                 --product "Red Hat Enterprise Linux Server" \
                                                 --organization "dbi services"
Repository enabled

To enable the second one using the Console, go to “Content->Red Hat Subscriptions”, locate the “Red Hat Enterprise Linux” section, expand it and enable “Red Hat Satellite Tools 6.3 (for RHEL 7 Server) (RPMs)”:

Selection_067
Selection_068
Selection_069

To verify what we did:

[root@satellite ~]$ hammer repository list --product "Red Hat Enterprise Linux Server" --organization "dbi services"
---|-----------------------------------------------------------|---------------------------------|--------------|---------------------------------------------------------------------------------
ID | NAME                                                      | PRODUCT                         | CONTENT TYPE | URL                                                                             
---|-----------------------------------------------------------|---------------------------------|--------------|---------------------------------------------------------------------------------
1  | Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server     | Red Hat Enterprise Linux Server | yum          | https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os             
2  | Red Hat Satellite Tools 6.3 for RHEL 7 Server RPMs x86_64 | Red Hat Enterprise Linux Server | yum          | https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/sat-tools/6....
---|-----------------------------------------------------------|---------------------------------|--------------|---------------------------------------------------------------------------------

Finally we need to synchronize these repositories (this will take some time):

[root@satellite ~]$ hammer product synchronize --name "Red Hat Enterprise Linux Server" --organization "dbi services"

You can monitor the progress in the Console:
Selection_071
Selection_072
Selection_074

Btw: when you want to check the overall status of the components, you can do it like this on the command line:

[root@satellite pulp]$ hammer ping
candlepin:      
    Status:          ok
    Server Response: Duration: 19ms
candlepin_auth: 
    Status:          ok
    Server Response: Duration: 66ms
pulp:           
    Status:          ok
    Server Response: Duration: 111ms
pulp_auth:      
    Status:          ok
    Server Response: Duration: 52ms
foreman_tasks:  
    Status:          ok
    Server Response: Duration: 1163ms

That’s it for the basic configuration. Satellite is up and running and we have content that clients can consume. In the next post we’ll attach a new Red Hat Linux installation to the Satellite.

 

Cet article Getting started with Red Hat Satellite – Initial configuration est apparu en premier sur Blog dbi services.

Getting started with Red Hat Satellite – Installation

Fri, 2018-10-12 06:44

This is the start of a series of posts I wanted to write for a long time: Getting started with Red Hat Satellite. Just in case you don’t know what it is, this statement from the official Red Hat website summarizes it quite well: “As your Red Hat® environment continues to grow, so does the need to manage it to a high standard of quality. Red Hat Satellite is an infrastructure management product specifically designed to keep Red Hat Enterprise Linux® environments and other Red Hat infrastructure running efficiently, properly secured, and compliant with various standards.”. In this first post it is all about the installation of Satellite, and that is surprisingly easy. Let’s go.

What you need as a starting point is a Red Hat Enterprise Linux minimal installation, either version 6 or 7. In my case it is the latest 7 release as of today:

[root@satellite ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Of course the system should be fully registered so you will be able to install updates / fixes and additional packages (and that requires a Red Hat subscription):

[root@satellite ~]$ subscription-manager list
+-------------------------------------------+
    Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.5
Arch:           x86_64
Status:         Subscribed
Status Details: 
Starts:         11/20/2017
Ends:           09/17/2019

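If the system is not registered yet, this can be done with subscription-manager. A minimal sketch follows; the user name is a placeholder, and auto-attaching a subscription is an assumption that may not fit every setup:

[root@satellite ~]$ subscription-manager register --username <your-rh-account>   # prompts for the account password
[root@satellite ~]$ subscription-manager attach --auto                           # attach a suitable subscription automatically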
As time synchronization is critical, it should be up and running before proceeding. For Red Hat Enterprise Linux, chrony is the tool to go for:

[root@satellite ~]$ yum install -y chrony
[root@satellite ~]$ systemctl enable chronyd
[root@satellite ~]$ systemctl start chronyd

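To verify that time synchronization actually works, chrony’s status can be checked with a quick sketch like this:

[root@satellite ~]$ chronyc tracking      # shows the current reference server and offset
[root@satellite ~]$ chronyc sources -v    # lists the configured time sources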
Satellite requires a fully qualified hostname, so let’s add that to the hosts file (of course you would do that with DNS in a real environment):

[root@satellite mnt]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.11 satellite.it.dbi-services.com satellite

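In case the hostname of the machine itself is not fully qualified yet, it can be set persistently with hostnamectl, using the same FQDN as in the hosts file above:

[root@satellite ~]$ hostnamectl set-hostname satellite.it.dbi-services.com
[root@satellite ~]$ hostname -f    # should now return the fully qualified name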
As a Satellite server only makes sense when clients can connect to it, a few ports need to be opened (I am not going into the details here, that will be the topic of another post):

[root@satellite ~]$ firewall-cmd --permanent \
                                 --add-port="53/udp" --add-port="53/tcp" \
                                 --add-port="67/udp" --add-port="69/udp" \
                                 --add-port="80/tcp"  --add-port="443/tcp" \
                                 --add-port="5000/tcp" --add-port="5647/tcp" \
                                 --add-port="8000/tcp" --add-port="8140/tcp" \
                                 --add-port="9090/tcp"

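As the rules above are added with --permanent only, they become active after a reload of the firewall configuration. A quick sketch to reload and verify:

[root@satellite ~]$ firewall-cmd --reload
[root@satellite ~]$ firewall-cmd --list-ports    # the ports added above should show up here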
That’s basically all you need to do as preparation. There are several methods to install Satellite; I will use the downloaded ISO as the source (the so-called “Disconnected Installation”, which is what you will usually need in enterprise environments):

[root@satellite ~]$ ls -la /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso 
-rw-r--r--. 1 root root 3041613824 Oct 11 18:16 /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso

First of all the required packages need to be installed so we need to mount the iso:

[root@satellite ~]$ mount -o ro,loop /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso /mnt
[root@satellite ~]$ cd /mnt/
[root@satellite mnt]# ls
addons  extra_files.json  install_packages  media.repo  Packages  repodata  RHSCL  TRANS.TBL

Installing the packages required for Satellite is just a matter of calling the “install_packages” script:

[root@satellite mnt]$ ./install_packages 
This script will install the satellite packages on the current machine.
   - Ensuring we are in an expected directory.
   - Copying installation files.
   - Creating a Repository File
   - Creating RHSCL Repository File
   - Checking to see if Satellite is already installed.
   - Importing the gpg key.
   - Installation repository will remain configured for future package installs.
   - Installation media can now be safely unmounted.

Install is complete. Please run satellite-installer --scenario satellite

The output already tells us what to do next: executing the “satellite-installer” script (I will go with the defaults here, but there are many options you could already specify at this point):

[root@satellite mnt]$ satellite-installer --scenario satellite
This system has less than 8 GB of total memory. Please have at least 8 GB of total ram free before running the installer.

Hm, I am running that locally in a VM, so let’s try to increase the memory at least for the duration of the installation and try again:

[root@satellite ~]$ satellite-installer --scenario satellite
Installing             Package[grub2-efi-x64]                             [0%] [                                         ]

… and here we go. Some minutes later the configuration/installation is completed:

[root@satellite ~]$ satellite-installer --scenario satellite
Installing             Done                                               [100%] [.......................................]
  Success!
  * Satellite is running at https://satellite.it.dbi-services.com
      Initial credentials are admin / L79AAUCMJWf6Y4HL

  * To install an additional Capsule on separate machine continue by running:

      capsule-certs-generate --foreman-proxy-fqdn "$CAPSULE" --certs-tar "/root/$CAPSULE-certs.tar"

  * To upgrade an existing 6.2 Capsule to 6.3:
      Please see official documentation for steps and parameters to use when upgrading a 6.2 Capsule to 6.3.

  The full log is at /var/log/foreman-installer/satellite.log

Ready:
Selection_065

Before we go into the details of how to initially configure the system in the next post, let’s check what we have running. A very good choice (at least if you ask me :) ) is the use of PostgreSQL as the repository database:

[root@satellite ~]$ ps -ef | grep postgres
postgres  1264     1  0 08:56 ?        00:00:00 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
postgres  1381  1264  0 08:56 ?        00:00:00 postgres: logger process   
postgres  2111  1264  0 08:57 ?        00:00:00 postgres: checkpointer process   
postgres  2112  1264  0 08:57 ?        00:00:00 postgres: writer process   
postgres  2113  1264  0 08:57 ?        00:00:00 postgres: wal writer process   
postgres  2114  1264  0 08:57 ?        00:00:00 postgres: autovacuum launcher process   
postgres  2115  1264  0 08:57 ?        00:00:00 postgres: stats collector process   
postgres  2188  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36952) idle
postgres  2189  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36954) idle
postgres  2193  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36958) idle
postgres  2194  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36960) idle
postgres  2218  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36964) idle
postgres  2474  1264  0 08:58 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2541  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36994) idle
postgres  2542  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36996) idle
postgres  2543  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36998) idle
postgres  2609  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2618  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2627  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2630  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2631  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2632  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2634  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2660  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2667  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2668  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2672  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2677  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2684  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2685  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2689  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
root      2742  2303  0 08:59 pts/0    00:00:00 grep --color=auto postgres

Let’s quickly check if that is a supported version of PostgreSQL:

[root@satellite ~]$ cat /var/lib/pgsql/data/PG_VERSION 
9.2
[root@satellite ~]$ su - postgres
-bash-4.2$ psql
psql (9.2.24)
Type "help" for help.

postgres=# select version();
                                                    version                                                    
---------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.2.24 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Hm, 9.2 is already out of support. That is nothing we would recommend to our customers, but as long as Red Hat itself supports it, it is probably fine. Just do not expect to get any fixes for that release from the PostgreSQL community. Going a bit further into the details, the PostgreSQL instance contains two additional users:

postgres=# \du
                             List of roles
 Role name |                   Attributes                   | Member of 
-----------+------------------------------------------------+-----------
 candlepin |                                                | {}
 foreman   |                                                | {}
 postgres  | Superuser, Create role, Create DB, Replication | {}

That corresponds to the connections to the instance we can see in the process list:

-bash-4.2$ ps -ef | egrep "foreman|candlepin" | grep postgres
postgres  2541  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36994) idle
postgres  2542  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36996) idle
postgres  2543  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36998) idle
postgres  2609  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2618  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2627  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2630  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2631  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2632  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2634  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2677  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2684  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2685  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2689  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  3143  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37114) idle
postgres  3144  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37116) idle
postgres  3145  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37118) idle
postgres  3146  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37120) idle
postgres  3147  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37122) idle

Foreman is responsible for the life cycle management and Candlepin for the subscription management. Both are fully open source and can also be used on their own. What else do we have:

[root@satellite ~]$ ps -ef | grep -i mongo
mongodb   1401     1  0 08:56 ?        00:00:08 /usr/bin/mongod --quiet -f /etc/mongod.conf run
root      3736  2303  0 09:11 pts/0    00:00:00 grep --color=auto -i mongo

In addition to the PostgreSQL instance there is also a MongoDB process running. What is it for? It is used by Katello which is a Foreman plugin that brings “the full power of content management alongside the provisioning and configuration capabilities of Foreman”.

The next component is Pulp:

[root@satellite ~]# ps -ef | grep pulp
apache    1067     1  0 08:56 ?        00:00:03 /usr/bin/python /usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler
apache    1076     1  0 08:56 ?        00:00:02 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac
apache    1085     1  0 08:56 ?        00:00:11 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache    1259     1  0 08:56 ?        00:00:12 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --maxtasksperchild=2
apache    1684  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1685  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1686  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1687  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1688  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1689  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1690  1042  0 08:56 ?        00:00:01 (wsgi:pulp_forg -DFOREGROUND
apache    2002  1085  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache   17757  1259  0 09:27 ?        00:00:00 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --maxtasksperchild=2
root     18147  2303  0 09:29 pts/0    00:00:00 grep --color=auto pulp

This one is responsible “for managing repositories of software packages and making it available to a large numbers of consumers”. So much for the main components. We will have a more in-depth look into these in one of the next posts.

 

Cet article Getting started with Red Hat Satellite – Installation est apparu en premier sur Blog dbi services.

How to migrate Grid Infrastructure from release 12c to release 18c

Fri, 2018-10-12 02:01

Oracle Clusterware 18c builds on this innovative technology by further enhancing support for larger multi-cluster environments and improving the overall ease of use. Oracle Clusterware is leveraged in the cloud in order to provide enterprise-class resiliency where required and dynamic as well as online allocation of compute resources where needed, when needed.
Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application.
HA in consolidated environments is no longer simple active/standby failover.

In this blog we will see how to upgrade our Grid Infrastructure stack from 12cR2 to 18c.

Step 1: You are required to patch your GI home with patch 27006180

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/OPatch/opatchauto apply /u90/Kit/27006180/ -oh /u91/app/grid/product/12.2.0/grid/

Performing prepatch operations on SIHA Home........

Start applying binary patches on SIHA Home........

Performing postpatch operations on SIHA Home........

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u91/app/grid/product/12.2.0/grid successfully
OPatchAuto successful.

Step 2: Check the list of applied patches

grid@dbisrv04:/u90/Kit/ [+ASM] /u91/app/grid/product/12.2.0/grid/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Lsinventory Output file location : /u91/app/grid/product/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-10-11_09-06-44AM.txt

--------------------------------------------------------------------------------
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.


Interim patches (1) :

Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018
Unique Patch ID:  21761216
Patch description:  "OCW Interim patch for 27006180"
   Created on 5 Dec 2017, 09:12:44 hrs PST8PDT
   Bugs fixed:
     13250991, 20559126, 22986384, 22999793, 23340259, 23722215, 23762756
........................
     26546632, 27006180

 

Step 3: Upgrade the binaries to release 18c

upgrade_grid

directory_new_grid

– it is recommended to run the rootupgrade.sh script manually

run_root_script

/u90/app/grid/product/18.3.0/grid/rootupgrade.sh
[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u90/app/grid/product/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u90/app/grid/product/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/dbisrv04/crsconfig/roothas_2018-10-11_09-21-27AM.log

2018/10/11 09:21:29 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2018/10/11 09:21:30 CLSRSC-363: User ignored prerequisites during installation
2018/10/11 09:21:31 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2018/10/11 09:21:34 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2018/10/11 09:22:25 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2018/10/11 09:23:52 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2018/10/11 09:23:57 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbisrv04 successfully pinned.
2018/10/11 09:24:00 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2018/10/11 09:24:02 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2018/10/11 09:24:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/10/11 09:24:49 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.
CRS-4123: Oracle High Availability Services has been started.


dbisrv04     2018/10/11 09:25:58     /u90/app/grid/product/18.3.0/grid/cdata/dbisrv04/backup_20181011_092558.olr     70732493   

dbisrv04     2018/07/31 15:24:14     /u91/app/grid/product/12.2.0/grid/cdata/dbisrv04/backup_20180731_152414.olr     0
2018/10/11 09:25:59 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbisrv04'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'dbisrv04'
CRS-2677: Stop of 'ora.driver.afd' on 'dbisrv04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbisrv04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/10/11 09:27:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

– you can ignore the warning related to the memory resources

ignore_prereq

completed_succesfully

– once the installation is finished, verify what has been done

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

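Besides the software version, the active release version of the stack can be queried as well. A small sketch, assuming the same 18c home, which should also report 18.0.0.0.0 after a successful upgrade:

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has releaseversion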
[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA3.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.db18c.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
--------------------------------------------------------------------------------
 

Cet article How to migrate Grid Infrastructure from release 12c to release 18c est apparu en premier sur Blog dbi services.

From Oracle to Postgres with the EDB Postgres Migration Portal

Mon, 2018-10-08 10:28

EnterpriseDB is a valuable actor in the PostgreSQL world. In addition to providing support, they also deliver very useful tools to easily manage your Postgres environments. Among these we can mention EDB Enterprise Manager, EDB Backup & Recovery Tool, EDB Failover Manager, and so on…
With this post I will present one of the latest in the family, EDB Postgres Migration Portal, a helpful tool to migrate from Oracle to Postgres.

To access the Portal, use your EDB account or create one if you don’t have any. By the way, with your account you can also connect to PostgresRocks, a very interesting community platform. Go take a look :) .

Once connected, click on “Create project” :
1

Fill in the fields and click on “Create”. Currently it is only possible to migrate from Oracle 11 or 12 to EDB Postgres Advanced Server 10:
2

All your projects are displayed at the bottom of the page. Click on the “Assess” link to continue :
3

The migration steps consist of the following :

  1. Extracting the DDL metadata from Oracle database using the EDB’s DDL Extractor script
  2. Running assessment
  3. Correcting conflicts
  4. Downloading and running the new DDL statements adapted to your EDB Postgres database
  5. Migrating data

1. Extracting the DDL metadata from Oracle database

The DDL Extractor script is easy to use. You just need to specify the schema name for which to extract the DDLs and the path to store the DDL file. As you can guess, the script runs the Oracle dbms_metadata.get_ddl package to extract the object definitions:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select object_type, count(*) from dba_objects where owner='HR' and status='VALID' group by object_type order by 1;

OBJECT_TYPE COUNT(*)
----------------------- ----------
INDEX 19
PROCEDURE 2
SEQUENCE 3
TABLE 7
TRIGGER 2

SQL>

SQL> @edb_ddl_extractor.sql
# -- EDB DDL Extractor Version 1.2 for Oracle Database -- #
# ------------------------------------------------------- #
Enter SCHEMA NAME to extract DDLs : HR
Enter PATH to store DDL file : /home/oracle/migration


Writing HR DDLs to /home/oracle/migration_gen_hr_ddls.sql
####################################################################################################################
## DDL EXTRACT FOR EDB POSTGRES MIGRATION PORTAL CREATED ON 03-10-2018 21:41:27 BY DDL EXTRACTION SCRIPT VERSION 1.2
##
## SOURCE DATABASE VERSION: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
####################################################################################################################
Extracting SYNONYMS...
Extracting DATABASE LINKS...
Extracting TYPE/TYPE BODY...
Extracting SEQUENCES...
Extracting TABLEs...
Extracting PARTITION Tables...
Extracting CACHE Tables...
Extracting CLUSTER Tables...
Extracting KEEP Tables...
Extracting INDEX ORGANIZED Tables...
Extracting COMPRESSED Tables...
Extracting NESTED Tables...
Extracting EXTERNAL Tables..
Extracting INDEXES...
Extracting CONSTRAINTS...
Extracting VIEWs..
Extracting MATERIALIZED VIEWs...
Extracting TRIGGERs..
Extracting FUNCTIONS...
Extracting PROCEDURE...
Extracting PACKAGE/PACKAGE BODY...


DDLs for Schema HR have been stored in /home/oracle/migration_gen_hr_ddls.sql
Upload this file to the EDB Migration Portal to assess this schema for EDB Advanced Server Compatibility.


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@vmrefdba01:/home/oracle/migration/ [DB1]
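For the curious, the core of what the extractor does for a single object can be reproduced manually with dbms_metadata.get_ddl. This is only a hypothetical sketch; the connection string and the chosen object are assumptions and not part of the extractor itself:

oracle@vmrefdba01:/home/oracle/migration/ [DB1] sqlplus -s system/manager@DB1 <<'EOF'
SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0 LINESIZE 200
-- extract the DDL of a single table owned by HR
SELECT dbms_metadata.get_ddl('TABLE', 'EMPLOYEES', 'HR') FROM dual;
EOF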

2. Assessment

Go back to your browser. It’s time to check whether the Oracle schema can be imported to Postgres or not. Upload the output file…
4
…and click on “Run assessment” to start the check.
The result is presented as follows:
6

3. Correcting conflicts

We can notice an issue in the report above… the bfile type is not supported by EDB PPAS. You can click on the concerned table to get more details about the issue:
7
Tips: when you want to manage bfile columns in Postgres, you can use the external_file extension.
Of course several other conversion issues can happen. A very good point with the Portal is that it provides a knowledge base to solve conflicts. You will find all necessary information and workarounds by navigating to the “Repair handler” and “Knowledge base” tabs. Moreover, you can do the corrections directly from the Portal.

4. Creating the objects in Postgres database

Once you have corrected the conflicts and the assessment report indicates a 100% success ratio, click on the top right “Export DDL” button to download the new creation script adapted for EDB Postgres:
8
Then connect to your Postgres instance and run the script:
postgres=# \i Demo_HR.sql
CREATE SCHEMA
SET
CREATE SEQUENCE
CREATE SEQUENCE
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE PROCEDURE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TRIGGER
postgres=#

Quick check :
postgres=# select object_type, count(*) from dba_objects where schema_name='HR' and status='VALID' group by object_type order by 1;
object_type | count
-------------+-------
INDEX | 19
PROCEDURE | 2
SEQUENCE | 3
TABLE | 7
TRIGGER | 2
(5 rows)

Sounds good ! All objects have been created successfully.

5. Migrating data

The Migration Portal doesn’t provide an embedded solution to import the data. So to do that you can use the EDB Migration Toolkit (MTK).
Let’s see how it works…
You will find MTK in the edbmtk directory of the {PPAS_HOME}. Inside etc, the toolkit.properties file is used to store the connection parameters for the source and target databases:
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb] cat toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.101:1521:DB1
SRC_DB_USER=system
SRC_DB_PASSWORD=manager

TARGET_DB_URL=jdbc:edb://localhost:5444/postgres
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=admin123
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb]

MTK uses JDBC to connect to the Oracle database. You need to download the Oracle JDBC driver (ojdbc7.jar) and store it in the following location:
postgres@ppas01:/home/postgres/ [PG10edb] ll /etc/alternatives/jre/lib/ext/
total 11424
-rw-r--r--. 1 root root 4003800 Oct 20 2017 cldrdata.jar
-rw-r--r--. 1 root root 9445 Oct 20 2017 dnsns.jar
-rw-r--r--. 1 root root 48733 Oct 20 2017 jaccess.jar
-rw-r--r--. 1 root root 1204766 Oct 20 2017 localedata.jar
-rw-r--r--. 1 root root 617 Oct 20 2017 meta-index
-rw-r--r--. 1 root root 2032243 Oct 20 2017 nashorn.jar
-rw-r--r--. 1 root root 3699265 Jun 17 2016 ojdbc7.jar
-rw-r--r--. 1 root root 30711 Oct 20 2017 sunec.jar
-rw-r--r--. 1 root root 293981 Oct 20 2017 sunjce_provider.jar
-rw-r--r--. 1 root root 267326 Oct 20 2017 sunpkcs11.jar
-rw-r--r--. 1 root root 77962 Oct 20 2017 zipfs.jar
postgres@ppas01:/home/postgres/ [PG10edb]

As HR’s objects already exist, let’s start the data migration with the -dataOnly option :
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb] ./runMTK.sh -dataOnly -truncLoad -logBadSQL HR
Running EnterpriseDB Migration Toolkit (Build 51.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.101:1521:DB1
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/postgres
user =postgres
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '10.5.12'
Importing redwood schema HR...
Loading Table Data in 8 MB batches...
Disabling FK constraints & triggers on hr.countries before truncate...
Truncating table COUNTRIES before data load...
Disabling indexes on hr.countries before data load...
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.054 Total Rows: 25
Disabling FK constraints & triggers on hr.departments before truncate...
Truncating table DEPARTMENTS before data load...
Disabling indexes on hr.departments before data load...
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.046 Total Rows: 27
Disabling FK constraints & triggers on hr.employees before truncate...
Truncating table EMPLOYEES before data load...
Disabling indexes on hr.employees before data load...
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.168 Total Rows: 107 Total Size(MB): 0.0087890625
Disabling FK constraints & triggers on hr.jobs before truncate...
Truncating table JOBS before data load...
Disabling indexes on hr.jobs before data load...
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.01 Total Rows: 19
Disabling FK constraints & triggers on hr.job_history before truncate...
Truncating table JOB_HISTORY before data load...
Disabling indexes on hr.job_history before data load...
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.035 Total Rows: 10
Disabling FK constraints & triggers on hr.locations before truncate...
Truncating table LOCATIONS before data load...
Disabling indexes on hr.locations before data load...
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.053 Total Rows: 23 Total Size(MB): 9.765625E-4
Disabling FK constraints & triggers on hr.regions before truncate...
Truncating table REGIONS before data load...
Disabling indexes on hr.regions before data load...
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Enabling FK constraints & triggers on hr.countries...
Enabling indexes on hr.countries after data load...
Enabling FK constraints & triggers on hr.departments...
Enabling indexes on hr.departments after data load...
Enabling FK constraints & triggers on hr.employees...
Enabling indexes on hr.employees after data load...
Enabling FK constraints & triggers on hr.jobs...
Enabling indexes on hr.jobs after data load...
Enabling FK constraints & triggers on hr.job_history...
Enabling indexes on hr.job_history after data load...
Enabling FK constraints & triggers on hr.locations...
Enabling indexes on hr.locations after data load...
Enabling FK constraints & triggers on hr.regions...
Enabling indexes on hr.regions after data load...
Data Load Summary: Total Time (sec): 0.785 Total Rows: 215 Total Size(MB): 0.01

Schema HR imported successfully.
Migration process completed successfully.

Migration logs have been saved to /home/postgres/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Tables: 7 out of 7

Total objects: 7
Successful count: 7
Failed count: 0
Invalid count: 0

*************************************************************
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb]

Quick check :
postgres=# select * from hr.regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
(4 rows)

Conclusion

Easy, isn’t it?
Once again, EnterpriseDB is providing a very practical, user-friendly and easy-to-handle tool. In my demo the HR schema is pretty simple; the migration of more complex schemas can be more challenging. Currently only migrations from Oracle are available, but SQL Server and other legacy databases should be supported in future versions. In the meantime, you must use the EDB Migration Toolkit for that.

That’s it. Have fun and… be ready to say goodbye to Oracle :-)

 

Cet article From Oracle to Postgres with the EDB Postgres Migration Portal est apparu en premier sur Blog dbi services.

Deploy a MySQL Server in Docker containers

Mon, 2018-10-08 06:26

We hear about Docker every day. As I work on MySQL Server, I was curious to test this platform, which makes it possible to create containers independent of the OS to deploy virtualized applications.
So let’s try to deploy a MySQL Server with Docker!


Here is the architecture we will put in place:
MySQL on Docker
So we will run a Docker container for MySQL within a VM.

I’m working on a CentOS 7 installed on a VirtualBox Machine:

[root@node4 ~]# cat /etc/*release*
CentOS Linux release 7.5.1804 (Core)
Derived from Red Hat Enterprise Linux 7.5 (Source)

I install Docker on my VM and enable the Docker service:

[root@node4 ~]# yum install docker
[root@node4 ~]# systemctl enable docker.service

I check the status of the Docker service and then start it:

[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com
[root@node4 ~]# systemctl stop docker.service
[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com
[root@node4 ~]# systemctl start docker.service
[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-10-05 16:42:33 CEST; 2s ago
     Docs: http://docs.docker.com
 Main PID: 1514 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─1514 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nati...
           └─1518 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2...
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.561072533+02:00" level=warning msg="Docker could not enable SELinux on the...t system"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.597927636+02:00" level=info msg="Graph migration to content-addressability... seconds"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.598407196+02:00" level=info msg="Loading containers: start."
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.642465451+02:00" level=info msg="Firewalld running: false"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.710685631+02:00" level=info msg="Default bridge (docker0) is assigned with... address"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.762876995+02:00" level=info msg="Loading containers: done."
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.780275247+02:00" level=info msg="Daemon has completed initialization"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.780294728+02:00" level=info msg="Docker daemon" commit="8633870/1.13.1" gr...on=1.13.1
Oct 05 16:42:33 node4 systemd[1]: Started Docker Application Container Engine.
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.799371435+02:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.

I check my network:

[root@node4 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f3:9e:fa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 85959sec preferred_lft 85959sec
    inet6 fe80::a00:27ff:fef3:9efa/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:45:62:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.204/24 brd 192.168.56.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe45:62a7/64 scope link
       valid_lft forever preferred_lft forever
4: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b0:bf:02:d6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
[root@node4 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b32241ce8931        bridge              bridge              local
9dd4a24a4e61        host                host                local
f1490ec17c17        none                null                local

So I have a network bridge named docker0 to which an IP address is assigned.

To obtain some information about the system, I can run the following command:

[root@node4 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: false
 Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 seccomp
  WARNING: You're not using the default seccomp profile
  Profile: /etc/docker/seccomp.json
Kernel Version: 3.10.0-862.3.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 1
Total Memory: 867.7 MiB
Name: node4
ID: 6FFJ:Z33K:PYG3:2N4B:MZDO:7OUF:R6HW:ES3D:H7EK:MFLA:CAJ3:GF67
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)

For the moment I have no containers:

[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Now I can search the Docker Hub for MySQL images and pull the first one in my example (I normally choose an official build with the highest number of stars):

[root@node4 ~]# docker search --filter "is-official=true" --filter "stars=3" mysql
INDEX       NAME                DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/mysql     MySQL is a widely used, open-source relati...   7075      [OK]
docker.io   docker.io/mariadb   MariaDB is a community-developed fork of M...   2267      [OK]
docker.io   docker.io/percona   Percona Server is a fork of the MySQL rela...   376       [OK]
[root@node4 ~]# docker pull docker.io/mysql
Using default tag: latest
Trying to pull repository docker.io/library/mysql ...
latest: Pulling from docker.io/library/mysql
802b00ed6f79: Pull complete
30f19a05b898: Pull complete
3e43303be5e9: Pull complete
94b281824ae2: Pull complete
51eb397095b1: Pull complete
54567da6fdf0: Pull complete
bc57ddb85cce: Pull complete
d6cd3c7302aa: Pull complete
d8263dad8dbb: Pull complete
780f2f86056d: Pull complete
8e0761cb58cd: Pull complete
7588cfc269e5: Pull complete
Digest: sha256:038f5f6ea8c8f63cfce1bce9c057ab3691cad867e18da8ad4ba6c90874d0537a
Status: Downloaded newer image for docker.io/mysql:latest

I create my container for MySQL named mysqld1:

[root@node4 ~]# docker run -d --name mysqld1 docker.io/mysql
b058fba64c7e585caddfc75f5d96076edb3e80b31773f135d9e44a3487724914

But when I list it, I see that I have a problem, as it has exited with an error:

[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
b058fba64c7e        docker.io/mysql     "docker-entrypoint..."   55 seconds ago      Exited (1) 54 seconds ago                       mysqld1
[root@node4 ~]# docker logs mysqld1
error: database is uninitialized and password option is not specified
  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD

That means that I forgot to assign a password for the ‘root’ user account of MySQL Server. So I stop and then remove the container, and create it again with some additional options:

[root@node4 ~]# docker stop b058fba64c7e
b058fba64c7e
[root@node4 ~]# docker rm b058fba64c7e
b058fba64c7e
[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node4 ~]# docker run --name mysqld1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=manager -d docker.io/mysql
46a2020f58740d5a87288073ab6292447fe600f961428307d2e2727454655504

Now my container is up and running:

[root@node4 ~]#  docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
46a2020f5874        docker.io/mysql     "docker-entrypoint..."   5 seconds ago       Up 5 seconds        0.0.0.0:3306->3306/tcp, 33060/tcp   mysqld1

I can execute a bash shell in the container in interactive mode to open a session on it:

[root@node4 ~]# docker exec -it mysqld1 bash
root@46a2020f5874:/#

And try to connect to MySQL Server:

root@46a2020f5874:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.12 MySQL Community Server - GPL
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.02 sec)

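Thanks to the -p 3306:3306 option, the server is also reachable from outside the container. Assuming a MySQL client is installed on the VM itself, a minimal sketch of a connection through the published port looks like this:

[root@node4 ~]# # connect from the host through the published port (prompts for the root password)
[root@node4 ~]# mysql -h 127.0.0.1 -P 3306 -uroot -p -e "SELECT VERSION();"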
Great news, everything works well! In a few minutes I have a MySQL Server in its latest version up and running in a Docker container.

 

Cet article Deploy a MySQL Server in Docker containers est apparu en premier sur Blog dbi services.

Walking through the Zürich ZOUG Event – September the 18th

Fri, 2018-10-05 09:38

What a nice and interesting new experience… my first ZOUG event… an interesting opportunity to meet some great people and listen to some great sessions. I had the chance to attend Markus Michalewicz’s sessions. Markus is Senior Director, Database HA and Scalability Product Management at Oracle, and was the special guest of this event.

https://soug.ch/events/soug-day-september-2018/

The introduction session was done by Markus. He presented the different HA solutions in order to talk about MAA. Oracle Maximum Availability Architecture (MAA) is, from my understanding, more a service delivered by Oracle to help customers find the best solution at the lowest cost and complexity according to their constraints.

I was really looking forward to the next session from Robert Bialek from Trivadis about Oracle database service high availability with Data Guard. Bialek delivered a nice presentation of Data Guard, how it works, and provided some good tips on the way it should be configured.

The best session was certainly the next one, given by my colleague Clemens Bleile, Oracle Technology Leader at dbi. What a great sharing of experience from his past years as one of the managers in the Oracle Support Performance team EMEA. Clemens talked about SQLTXPLAIN, the performance troubleshooting tool, its history and its future. Clemens also presented the SQLT tool.

The last session I followed was chaired by Markus. The subject was the autonomous database and all the automatic features that came along with the last Oracle releases. Will this make databases able to manage themselves? The future will tell. :-)

Thanks to the dbi management for giving me the opportunity to join this ZOUG event!

 

Cet article Walking through the Zürich ZOUG Event – September the 18th est apparu en premier sur Blog dbi services.

First steps into SQL Server 2019 availability groups on K8s

Thu, 2018-10-04 01:27

A couple of weeks ago, Microsoft announced the first public CTP version of the next SQL Server release (CTP 2.0). It is not a surprise: SQL Server vNext becomes SQL Server 2019 and there are plenty of enhancements as well as new features to discover. But for now, let’s start with likely one of my favorites: availability groups on Kubernetes (aka K8s). As far as I can see from customers and hear from my colleagues as well, we are witnessing a strong adoption of K8s, with OpenShift as a main driver. I would not be surprised to see some SQL Server pods at customer shops in the near future, especially with the support of availability groups on K8s. In my opinion, that is definitely something that was missing in the previous releases for microservices architectures or not, for either quality or production environments.

blog 143 - 0 - AG K8s

Well, I decided to learn more about this new feature, but keep in mind that this write-up concerns the CTP 2.0 version and chances are things will change in the future. So, don’t focus strictly on the exact words or commands I’m using in this blog post.

It has been some time since I last used the Azure Kubernetes Service (AKS) and I already wrote about it in a previous blog post. I used the same environment to deploy my first availability group on K8s. It was definitely an interesting experience because it involved acquiring technical skills about the K8s infrastructure.

So, let’s briefly set the context with my K8s cluster on Azure, which is composed of 3 agent nodes as shown below:

$ kubectl get nodes -o wide
NAME                       STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-78763348-0   Ready     agent     126d      v1.9.6    <none>        Ubuntu 16.04.4 LTS   4.13.0-1016-azure   docker://1.13.1
aks-nodepool1-78763348-1   Ready     agent     126d      v1.9.6    <none>        Ubuntu 16.04.4 LTS   4.13.0-1016-azure   docker://1.13.1
aks-nodepool1-78763348-2   Ready     agent     35d       v1.9.6    <none>        Ubuntu 16.04.5 LTS   4.15.0-1023-azure   docker://1.13.1

 

I also used a custom namespace – agdev – to scope my availability group resource names.

$ kubectl get ns
NAME           STATUS        AGE
ag1            Terminating   23h
agdev          Active        10h
azure-system   Active        124d
default        Active        124d
kube-public    Active        124d
kube-system    Active        124d

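For reference, the namespace itself was simply created beforehand; nothing fancy here:

$ kubectl create namespace agdev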
 

Referring to the Microsoft documentation, the SQL secrets (including master key and SA password secrets) are ready for use:

$ kubectl get secret sql-secrets -n agdev
NAME                   TYPE                                  DATA      AGE
sql-secrets            Opaque                                2         1d

$ kubectl describe secret sql-secrets -n agdev
Name:         sql-secrets
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
masterkeypassword:  14 bytes
sapassword:         14 bytes

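For reference, such a secret can be created with kubectl before deploying anything else. A short sketch along the lines of the official walkthrough; the password values below are placeholders:

$ kubectl create secret generic sql-secrets -n agdev \
    --from-literal=sapassword='<sa-password>' \
    --from-literal=masterkeypassword='<master-key-password>'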
 

  • The operator

The first component to deploy is the operator, which is a very important component in this infrastructure and builds upon the basic Kubernetes resource and controller concepts. Kubernetes has a very pluggable way to add your own logic in the form of a controller, in addition to the existing built-in controllers like the old-fashioned replication controller, the replica sets and deployments. All of them are suitable for stateless applications, but the story is not the same when we have to deal with stateful systems like databases, because those systems require specific application domain knowledge to correctly scale, upgrade and reconfigure while protecting against data loss or unavailability. For example, how to deal correctly with availability groups during a pod crash? If we think about it, the work doesn’t consist only in restarting the crashed pod: the system will also have to execute custom tasks in the background, including electing a new primary (aka leader election), ensuring a safe transition during the failover period to avoid split-brain scenarios, etc.

Deploying the mssql-operator includes the creation of a new pod:

$ kubectl get pods -n agdev -l app=mssql-operator
NAME                              READY     STATUS    RESTARTS   AGE
mssql-operator-67447c4bd8-s6tbv   1/1       Running   0          11h

 

Let’s go further by getting more details about this pod:

$ kubectl describe pod -n agdev mssql-operator-67447c4bd8-s6tbv
Name:           mssql-operator-67447c4bd8-s6tbv
Namespace:      agdev
Node:           aks-nodepool1-78763348-0/10.240.0.4
Start Time:     Mon, 01 Oct 2018 08:12:47 +0200
Labels:         app=mssql-operator
                pod-template-hash=2300370684
Annotations:    <none>
Status:         Running
IP:             10.244.1.56
Controlled By:  ReplicaSet/mssql-operator-67447c4bd8
Containers:
  mssql-operator:
    Container ID:  docker://148ba4b8ccd91159fecc3087dd4c0b7eb7feb36be4b3b5124314121531cd3a3c
    Image:         mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
    Image ID:      docker-pullable://mcr.microsoft.com/mssql/ha@sha256:c5d20c8b34ea096a845de0222441304a14ad31a447d79904bafaf29f898704d0
    Port:          <none>
    Host Port:     <none>
    Command:
      /mssql-server-k8s-operator
    State:          Running
      Started:      Mon, 01 Oct 2018 08:13:32 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      MSSQL_K8S_NAMESPACE:  agdev (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from mssql-operator-token-bd5gc (ro)
…
Volumes:
  mssql-operator-token-bd5gc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mssql-operator-token-bd5gc
    Optional:    false

 

Some interesting items to note here:

  • The SQL Server CTP image – mcr.microsoft.com/mssql/ha – comes from the new Microsoft Container Registry (MCR). The current tag is vNext-CTP2.0-ubuntu at the moment of this write-up
  • A secret volume is mounted to pass sensitive data concerning the K8s service account used by the pod. In fact, the deployment of availability groups implies the creation of multiple service accounts, as shown below:
$ kubectl describe secret -n agdev mssql-operator-token-bd5gc
Name:         mssql-operator-token-bd5gc
Namespace:    agdev
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=mssql-operator
              kubernetes.io/service-account.uid=03cb111e-c541-11e8-a34a-0a09b8f01b34

Type:  kubernetes.io/service-account-token

Data
====
namespace:  5 bytes
token:      xxxx
ca.crt:     1720 bytes

 

The command is /mssql-server-k8s-operator, a binary like the other mssql-server* files packaged in the new mssql/ha image, which are designed to respond to different events with appropriate actions such as updating K8s resources:

$ kubectl exec -ti -n agdev mssql-operator-67447c4bd8-s6tbv -- /bin/bash
root@mssql-operator-67447c4bd8-s6tbv:/# ll mssql*
-rwxrwxr-x 1 root root 32277998 Sep 19 16:00 mssql-server-k8s-ag-agent*
-rwxrwxr-x 1 root root 31848041 Sep 19 16:00 mssql-server-k8s-ag-agent-supervisor*
-rwxrwxr-x 1 root root 31336739 Sep 19 16:00 mssql-server-k8s-failover*
-rwxrwxr-x 1 root root 32203064 Sep 19 16:00 mssql-server-k8s-health-agent*
-rwxrwxr-x 1 root root 31683946 Sep 19 16:00 mssql-server-k8s-init-sql*
-rwxrwxr-x 1 root root 31422517 Sep 19 16:00 mssql-server-k8s-operator*
-rwxrwxr-x 1 root root 31645032 Sep 19 16:00 mssql-server-k8s-rotate-creds*

root@mssql-operator-67447c4bd8-s6tbv:/# file mssql-server-k8s-operator
mssql-server-k8s-operator: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped

 

  • The SQL Server instances and AGs

The next step consisted in running the SQL Server AG deployment. Looking at the manifest file, we may notice that we deploy custom SQL Server objects (kind: SqlServer) from the new mssql.microsoft.com API installed previously, as well as their corresponding services to expose the SQL Server pods to external traffic.

The deployment includes 3 StatefulSets that manage pods with 2 containers each, respectively the SQL Server engine and its agent (HA supervisor). I was surprised not to see a deployment with kind: StatefulSet, but I got the confirmation that this “logic” is encapsulated in the SqlServer object definition. Why StatefulSets here? Well, because they are more valuable for applications like databases: they provide, inter alia, stable and unique network identifiers as well as stable and persistent storage. Stateless pods do not provide such capabilities. To meet the StatefulSet prerequisites, we first need to define persistent volumes for each SQL Server pod. Recent versions of K8s allow dynamic provisioning, and this is exactly what is used in the initial Microsoft deployment file with the instanceRootVolumeClaimTemplate:

instanceRootVolumeClaimTemplate:
   accessModes: [ReadWriteOnce]
   resources:
     requests: {storage: 5Gi}
   storageClass: default

 

However, in my context I already created persistent volumes for previous tests as shown below:

$ kubectl get pv -n agdev
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS   REASON    AGE
pvc-cb299d79-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            Delete           Bound     agdev/mssql-data-1   azure-disk               9h
pvc-cb4915b4-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            Delete           Bound     agdev/mssql-data-2   azure-disk               9h
pvc-cb67cd06-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            Delete           Bound     agdev/mssql-data-3   azure-disk               9h

$ kubectl get pvc -n agdev
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mssql-data-1   Bound     pvc-cb299d79-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            azure-disk     9h
mssql-data-2   Bound     pvc-cb4915b4-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            azure-disk     9h
mssql-data-3   Bound     pvc-cb67cd06-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            azure-disk     9h

 

So, I slightly changed the initial manifest file, referencing my existing persistent volume claims for each SqlServer object:

instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-1

instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-2

instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-3
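Putting the pieces together, one SqlServer object plus its per-instance service ends up looking roughly like the sketch below. This is reconstructed and abbreviated from the snippets above and the CTP 2.0 sample, so treat the field names as illustrative rather than authoritative; they may change in later CTPs:

$ kubectl apply -n agdev -f - <<'EOF'
apiVersion: mssql.microsoft.com/v1
kind: SqlServer
metadata:
  name: mssql1
  labels: {name: mssql1, type: sqlservr}
spec:
  acceptEula: true
  sqlServerContainer: {image: mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu}
  agentsContainerImage: mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
  availabilityGroups: [ag1]
  saPassword:
    secretKeyRef: {name: sql-secrets, key: sapassword}
  masterKeyPassword:
    secretKeyRef: {name: sql-secrets, key: masterkeypassword}
  instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-1
---
apiVersion: v1
kind: Service
metadata: {name: mssql1}
spec:
  type: LoadBalancer
  selector: {name: mssql1, type: sqlservr}
  ports:
  - {name: tds, port: 1433}
EOF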

 

Furthermore, the next StatefulSet prerequisite is a headless service, and this is exactly what we find with the ag1 service created during the deployment:

$ kubectl get svc -n agdev
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)             AGE
ag1           ClusterIP      None           <none>          1433/TCP,5022/TCP   1d

 

I also noticed some other interesting items like extra pods in completed state:

$ kubectl get pods -n agdev -l app!=mssql-operator
NAME                            READY     STATUS      RESTARTS   AGE
mssql-initialize-mssql1-plh8l   0/1       Completed   0          9h
mssql-initialize-mssql2-l6z8m   0/1       Completed   0          9h
mssql-initialize-mssql3-wrbkl   0/1       Completed   0          9h
mssql1-0                        2/2       Running     0          9h
mssql2-0                        2/2       Running     0          9h
mssql3-0                        2/2       Running     0          9h

$ kubectl get sts -n agdev
NAME      DESIRED   CURRENT   AGE
mssql1    1         1         9h
mssql2    1         1         9h
mssql3    1         1         9h

 

In fact, those pods are related to jobs created and executed in the background during the deployment of the SQL Server AG:

$ kubectl get jobs -n agdev
NAME                      DESIRED   SUCCESSFUL   AGE
mssql-initialize-mssql1   1         1            22h
mssql-initialize-mssql2   1         1            22h
mssql-initialize-mssql3   1         1            22h

 

Let’s take a look at the mssql-initialize-mssql1 job:

$ kubectl describe job -n agdev mssql-initialize-mssql1
Name:           mssql-initialize-mssql1
Namespace:      agdev
Selector:       controller-uid=cd481f3c-c5b5-11e8-a34a-0a09b8f01b34
Labels:         controller-uid=cd481f3c-c5b5-11e8-a34a-0a09b8f01b34
                job-name=mssql-initialize-mssql1
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Mon, 01 Oct 2018 22:08:45 +0200
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:           controller-uid=cd481f3c-c5b5-11e8-a34a-0a09b8f01b34
                    job-name=mssql-initialize-mssql1
  Service Account:  mssql-initialize-mssql1
  Containers:
   mssql-initialize:
    Image:      mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
    Port:       <none>
    Host Port:  <none>
    Command:
      /mssql-server-k8s-init-sql
    Environment:
      MSSQL_K8S_NAMESPACE:              (v1:metadata.namespace)
      MSSQL_K8S_SA_PASSWORD:           <set to the key 'sapassword' in secret 'sql-secrets'>  Optional: false
      MSSQL_K8S_NUM_SQL_SERVERS:       1
      MSSQL_K8S_SQL_POD_OWNER_UID:     cd13319a-c5b5-11e8-a34a-0a09b8f01b34
      MSSQL_K8S_SQL_SERVER_NAME:       mssql1
      MSSQL_K8S_SQL_POST_INIT_SCRIPT:
      MSSQL_K8S_MASTER_KEY_PASSWORD:   <set to the key 'masterkeypassword' in secret 'sql-secrets'>  Optional: false
    Mounts:                            <none>
  Volumes:                             <none>
Events:                                <none>

 

These jobs are one-time initialization code executed when SQL Server and the AG are bootstrapped (thanks to @MihaelaBlendea for giving more details on this topic) through the mssql-server-k8s-init-sql command. The completed pods are likely something you may remove afterwards, depending on your context (if you deal with a lot of K8s jobs on a daily basis, for example).

Then, the deployment created the 3 StatefulSets with their respective pods mssql1-0, mssql2-0 and mssql3-0. Each pod contains 2 containers, as shown below for the mssql1-0 pod:

$ kubectl describe pod -n agdev mssql1-0
Name:           mssql1-0
Namespace:      agdev
Node:           aks-nodepool1-78763348-1/10.240.0.5
…
Status:         Running
IP:             10.244.0.38
Controlled By:  StatefulSet/mssql1
Containers:
  mssql-server:
    Container ID:   docker://8e23cec873ea3d1ebd98f8f4f0ab0b11b840c54c17557d23817b9c21a863bb42
    Image:          mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu
    Image ID:       docker-pullable://mcr.microsoft.com/mssql/server@sha256:87e691e2e5f738fd64a427ebe935e4e5ccd631be1b4f66be1953c7450418c8c8
    Ports:          1433/TCP, 5022/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 01 Oct 2018 22:11:44 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=60s timeout=1s period=2s #success=1 #failure=3
    Environment:
      ACCEPT_EULA:        y
      MSSQL_PID:          Developer
      MSSQL_SA_PASSWORD:  <set to the key 'initsapassword' in secret 'mssql1-statefulset-secret'>  Optional: false
      MSSQL_ENABLE_HADR:  1
    Mounts:
      /var/opt/mssql from instance-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from no-api-access (ro)
  mssql-ha-supervisor:
    Container ID:  docker://f5a0d4d51a459752a2c509eb3ec7874d94586a7499201f559c9ad8281751e514
    Image:         mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
    Image ID:      docker-pullable://mcr.microsoft.com/mssql/ha@sha256:c5d20c8b34ea096a845de0222441304a14ad31a447d79904bafaf29f898704d0
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /mssql-server-k8s-ag-agent-supervisor
    State:          Running
      Started:      Mon, 01 Oct 2018 22:11:45 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      MSSQL_K8S_NAMESPACE:                         agdev (v1:metadata.namespace)
      MSSQL_K8S_POD_NAME:                          mssql1-0 (v1:metadata.name)
      MSSQL_K8S_SQL_SERVER_NAME:                   mssql1
      MSSQL_K8S_POD_IP:                             (v1:status.podIP)
      MSSQL_K8S_NODE_NAME:                          (v1:spec.nodeName)
      MSSQL_K8S_MONITOR_POLICY:                    3
      MSSQL_K8S_HEALTH_CONNECTION_REBOOT_TIMEOUT:
      MSSQL_K8S_SKIP_AG_ANTI_AFFINITY:
      MSSQL_K8S_MONITOR_PERIOD_SECONDS:
      MSSQL_K8S_LEASE_DURATION_SECONDS:
      MSSQL_K8S_RENEW_DEADLINE_SECONDS:
      MSSQL_K8S_RETRY_PERIOD_SECONDS:
      MSSQL_K8S_ACQUIRE_PERIOD_SECONDS:
      MSSQL_K8S_SQL_WRITE_LEASE_PERIOD_SECONDS:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from mssql1-token-5zlkq (ro)
….
Volumes:
  no-api-access:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  instance-root:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data-1
    ReadOnly:   false
  mssql1-token-5zlkq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mssql1-token-5zlkq
    Optional:    false
…

 

We recognize the mssql-server and mssql-ha-supervisor containers described in the Microsoft documentation. The mssql-server container is listening on port 1433 (SQL engine) and 5022 (hadr endpoint). Note that the container includes an HTTP liveness probe (http-get http://:8080/healthz delay=60s timeout=1s period=2s #success=1 #failure=3) to determine its health. Moreover, the mssql-ha-supervisor container is self-explanatory and, judging by the environment variable names, aims to monitor the SQL Server instance. I believe another blog post will be necessary to talk about it. Each SQL Server pod (meaning a SQL Server instance here, listening on port 1433) is exposed to external traffic by a dedicated service, as shown below. External IPs are assigned to the K8s cluster load balancer services through the Azure Load Balancer (basic SKU).

$ kubectl get svc -n agdev
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)             AGE
ag1                    ClusterIP      None           <none>           1433/TCP,5022/TCP   23h
mssql1                 LoadBalancer   10.0.43.216    xx.xx.xx.xxx    1433:31674/TCP      23h
mssql2                 LoadBalancer   10.0.28.27     xx.xx.xx.xxx    1433:32681/TCP      23h
mssql3                 LoadBalancer   10.0.137.244   xx.xx.xxx.xxx    1433:31152/TCP      23h

 

  • The AG Services

Finally, I only deployed the service corresponding to ag1-primary, which connects to the primary replica. It is up to you to deploy the other ones according to your context. In fact, the ag1-primary service acts as the AG listener in this new infrastructure.
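The manifest behind it is essentially a LoadBalancer service whose selector targets the primary-role label managed by the operator (the label shown further down in this post). This is a reconstructed sketch, not the exact Microsoft sample, so double-check it against the official ag-services manifest:

$ kubectl apply -n agdev -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ag1-primary
spec:
  type: LoadBalancer
  selector:
    role.ag.mssql.microsoft.com/ag1: primary
  ports:
  - {name: tds, port: 1433}
EOF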

$ kubectl get svc -n agdev
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)             AGE
ag1           ClusterIP      None           <none>          1433/TCP,5022/TCP   23h
ag1-primary   LoadBalancer   10.0.32.104    xxx.xx.xx.xxx       1433:31960/TCP      1m
mssql1        LoadBalancer   10.0.43.216    xx.xx.xx.xxx   1433:31674/TCP      23h
mssql2        LoadBalancer   10.0.28.27     xx.xx.xx.xxx   1433:32681/TCP      23h
mssql3        LoadBalancer   10.0.137.244   xx.xx.xxx.xxx   1433:31152/TCP      23h

 

So, it’s time to connect to my availability group through the external IP of the ag1-primary service. I already added a test database to the availability group.
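A quick connectivity check with sqlcmd might look like this (the external IP is masked in the outputs above, so both the IP and the SA password below are placeholders):

$ sqlcmd -S <ag1-primary external IP>,1433 -U sa -P '<sapassword>' -Q "SELECT @@SERVERNAME AS current_primary"

From there, the following queries give a picture of the situation: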

-- groups info
SELECT 
	g.name as ag_name,
	rgs.primary_replica, 
	rgs.primary_recovery_health_desc as recovery_health, 
	rgs.synchronization_health_desc as sync_health
FROM sys.dm_hadr_availability_group_states as rgs
JOIN sys.availability_groups AS g
				 ON rgs.group_id = g.group_id

-- replicas info
SELECT 
	g.name as ag_name,
	r.replica_server_name,
	r.availability_mode_desc as [availability_mode],
	r.failover_mode_desc as [failover_mode],
	rs.is_local,
	rs.role_desc as role,
	rs.operational_state_desc as op_state,
	rs.connected_state_desc as connect_state,
	rs.synchronization_health_desc as sync_state,
	rs.last_connect_error_number,
	rs.last_connect_error_description
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS r
	ON rs.replica_id = r.replica_id
JOIN sys.availability_groups AS g
	ON g.group_id = r.group_id
ORDER BY r.replica_server_name, rs.is_local;

-- DB level          
SELECT 
	g.name as ag_name,
	r.replica_server_name,
	DB_NAME(drs.database_id) as [database_name],
	drs.is_local,
	drs.is_primary_replica,
	synchronization_state_desc as sync_state,
	synchronization_health_desc as sync_health,
	database_state_desc as db_state
FROM sys.dm_hadr_database_replica_states AS drs
		 JOIN sys.availability_replicas AS r
		  ON r.replica_id = drs.replica_id
		 JOIN sys.availability_groups AS g
		  ON g.group_id = drs.group_id
ORDER BY g.name, drs.is_primary_replica DESC;
GO

 

blog 143 - 1 - AG config

This is a common picture we may get with a traditional availability group. Another way to identify the primary replica is to go through kubectl get pods and filter by label, as follows:

$ kubectl get pods -n agdev -l="role.ag.mssql.microsoft.com/ag1"="primary"
NAME       READY     STATUS    RESTARTS   AGE
mssql1-0   2/2       Running   0          1d

 

To finish, let’s simulate the crash of the pod mssql1-0 and let’s see what happens:

$ kubectl delete pod -n agdev mssql1-0
pod "mssql1-0" deleted
kubectl get pods -n agdev
NAME                              READY     STATUS        RESTARTS   AGE
mssql-initialize-mssql1-plh8l     0/1       Completed     0          1d
mssql-initialize-mssql2-l6z8m     0/1       Completed     0          1d
mssql-initialize-mssql3-wrbkl     0/1       Completed     0          1d
mssql-operator-67447c4bd8-s6tbv   1/1       Running       0          1d
mssql1-0                          0/2       Terminating   0          1d
mssql2-0                          2/2       Running       0          1d
mssql3-0                          2/2       Running       0          1d

...

$ kubectl get pods -n agdev
NAME                              READY     STATUS              RESTARTS   AGE
mssql-initialize-mssql1-plh8l     0/1       Completed           0          1d
mssql-initialize-mssql2-l6z8m     0/1       Completed           0          1d
mssql-initialize-mssql3-wrbkl     0/1       Completed           0          1d
mssql-operator-67447c4bd8-s6tbv   1/1       Running             0          1d
mssql1-0                          0/2       ContainerCreating   0          9s
mssql2-0                          2/2       Running             0          1d
mssql3-0                          2/2       Running             0          1d

...

$ kubectl get pods -n agdev
NAME                              READY     STATUS      RESTARTS   AGE
mssql-initialize-mssql1-plh8l     0/1       Completed   0          1d
mssql-initialize-mssql2-l6z8m     0/1       Completed   0          1d
mssql-initialize-mssql3-wrbkl     0/1       Completed   0          1d
mssql-operator-67447c4bd8-s6tbv   1/1       Running     0          1d
mssql1-0                          2/2       Running     0          2m
mssql2-0                          2/2       Running     0          1d
mssql3-0                          2/2       Running     0          1d

 

As expected, the controller detects the event and accordingly recreates another mssql1-0 pod, but that’s not all. First, because we are dealing with a StatefulSet, the pod keeps the same identity. The controller also performs other tasks, including failing the availability group over to another pod: the primary moves to the mssql3-0 pod, as shown below, and the label of this pod is updated to identify the new primary.

$ kubectl get pods -n agdev -l="role.ag.mssql.microsoft.com/ag1"="primary"
NAME       READY     STATUS    RESTARTS   AGE
mssql3-0   2/2       Running   0          1d

 

This blog post was just an overview of what a SQL Server availability group on K8s could look like. Obviously, there are plenty of other interesting items to cover and to dive deeper into… probably in the near future. Stay tuned!

 

Cet article First steps into SQL Server 2019 availability groups on K8s est apparu en premier sur Blog dbi services.

Foglight: Monitoring solution for databases [Part 01]

Wed, 2018-10-03 01:00
What is Foglight?

Foglight is a solution from Quest which promises to provide visibility into issues affecting the application and end user experience.

The solution also helps you quickly find the root cause of issues in the application, database, infrastructure or network, by providing “intuitive workflows”.

Let’s give it a try!

Preparing the installation

Requirements for installing Foglight are:

  • A machine to host the Management Server. Ideally dedicated
  • Administrator or root access to all machines requiring a Foglight agent
  • An administrator password for Foglight
  • A user account on the machine where you are installing Foglight
  • The IATEMPDIR environment variable is set to a location with sufficient space for installer self-extraction
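The last point simply means exporting the variable to a directory with enough free space before launching the installer, for example (the path below is just an example):

[foglight@mgt-server ~]$ export IATEMPDIR=/foglight/tmp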
Architecture

Foglight requires 2 components:

  • a Management Server: the data collection and processing server
  • a database repository: can be a PostgreSQL embedded in the installation process (Standard installation) or a supported external database: MySQL, Oracle, PostgreSQL or Microsoft SQL server (Custom Installation)
    • If you chose the embedded database, it is automatically stopped or started with the Management Server
    • You can start with the embedded database and then migrate to an external one. The procedure is available here
Important considerations
  • For this test I chose to download and install Foglight with the embedded PostgreSQL database
  • I will use the 45 days trial license which is by default activated at the installation. It is possible to install the license during the installation if you perform the custom install
  • I will make a silent installation given the fact that the Foglight installer can be started in command-line mode by using the console mode or silent mode
Installation

After unzipping the downloaded file, we can arrange the installation parameters according to our needs. This is done by editing the installation parameter file (a description of each parameter can be found here):


[foglight@mgt-server Installers]$ egrep -v "^#|^$" fms_silent_install.properties
INSTALLER_UI=SILENT
USER_INSTALL_DIR=/foglight/app
FMS_LICENSE_AGREEMENT=yes
FMS_SERVICE=false
FMS_UPGRADE=1
FMS_ADMIN_PASSWORD=foglight
FMS_HTTPS_ONLY=0
FMS_HA_MODE=0
FMS_DB_USER=foglight
FMS_DB_USER_PASSWORD=foglight
FMS_DB=embedded
FMS_DB_HOST=127.0.0.1
FMS_DB_PORT=15432
FMS_DB_SETUPNOW=1
FMS_RUN_NOW=false
FMS_CLUSTER_MCAST_PORT=45566
FMS_HTTP_PORT=8080
FMS_HTTPS_PORT=8443
FMS_FEDERATION_PORT=1099
FMS_QP5APP_PORT=8448
FMS_SERVICE_LINUX_ENABLED=0
FMS_SERVICE_LINUX_VALID_PLATFORM=false

Then we can run the installation in silent mode as shown below:


[foglight@mgt-server Installers]$ ./foglight-5.9.2-foglightfordatabaseslinux-x86_64.bin -i silent -f fms_silent_install.properties
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Preparing SILENT Mode Installation...

===============================================================================
Foglight 5.9.2 (created with InstallAnywhere by Macrovision)
-------------------------------------------------------------------------------

===============================================================================
Installing...
-------------

[==================|==================|==================|==================] [------------------|------------------|------------------|------------------]

Installation Complete.

At this stage the installation should have succeeded. If that is not the case, have a look at the log files located in the user's home directory:

[foglight@mgt-server ~]$ ll ~
total 8
-rw-rw-r-- 1 foglight foglight 573 Oct 1 14:47 Foglight_5.9.3_Install_2018-10-01_144723_001.log
-rw-rw-r-- 1 foglight foglight 4026 Oct 1 14:47 Foglight_5.9.3_InstallLog.log

Start, Stop and login

Start

Now we can start our installation:

[foglight@mgt-server app]$ fms -d
2018-10-01 15:00:08.000 INFO [native] Attempting to start Foglight as a daemon.
The startup may take some time to complete. Please check the log file for more
information. Use the '--stop' command line option to shut down a running
daemon.
2018-10-01 15:00:08.000 INFO [native] Daemon process for 'Foglight' started.

And then check the running processes. There is one process for the Management Server and several for the PostgreSQL database, since we chose the embedded installation:

[foglight@mgt-server app]$ ps -ef | grep foglight
foglight 23601 1 74 23:01 pts/0 00:02:22 Foglight 5.9.2: Foglight Daemon
foglight 23669 1 0 23:01 pts/0 00:00:00 /foglight/app/postgresql/bin/postgres -D /foglight/app/state/postgresql-data --port=15432
foglight 23670 23669 0 23:01 ? 00:00:00 postgres: logger process
foglight 23672 23669 0 23:01 ? 00:00:00 postgres: checkpointer process
foglight 23673 23669 0 23:01 ? 00:00:00 postgres: writer process
foglight 23674 23669 0 23:01 ? 00:00:00 postgres: wal writer process
foglight 23675 23669 0 23:01 ? 00:00:00 postgres: autovacuum launcher process
foglight 23676 23669 0 23:01 ? 00:00:00 postgres: stats collector process
foglight 23687 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48463) idle
foglight 23688 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48464) idle
foglight 23689 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48465) idle
foglight 23690 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48466) idle
foglight 23691 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48467) idle
foglight 23692 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48468) idle
foglight 23693 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48469) idle
foglight 23694 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48470) idle
foglight 23695 23669 1 23:02 ? 00:00:03 postgres: foglight foglight 127.0.0.1(48471) idle
foglight 23853 23669 1 23:04 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48474) idle
foglight 23868 23601 47 23:04 pts/0 00:00:11 FoglightAgentManager 5.9.2: FglAM /foglight/app/fglam/state/default on server
foglight 23876 23868 0 23:04 ? 00:00:00 Quest Application Watchdog 5.9.5: Monitoring PID 23868
foglight 23943 1 0 23:04 pts/0 00:00:00 Quest Application Relauncher 5.9.5: /foglight/app/fglam/bin/fglam

Another option is to start the Management Server using an init script. This is particularly useful when you want the Management Server to start automatically after the host has been rebooted:

[root@mgt-server ~]# cp /foglight/app/scripts/init.d/Linux/foglight /etc/init.d/
[root@mgt-server ~]# ll /etc/init.d/foglight
-rwxr-xr-x 1 root root 2084 Oct 1 15:13 /etc/init.d/foglight
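To register it for automatic startup at boot (assuming a SysV-style init or the compatibility layer found on most RHEL/CentOS systems), something like this should be enough:

[root@mgt-server ~]# chkconfig --add foglight
[root@mgt-server ~]# chkconfig foglight on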

Login

Given the parameters provided in the installation files, I can reach the web console here: https://192.168.56.100:8443
My user/password is the default foglight/foglight and here I am:

Capture

Stop

[foglight@mgt-server ~]$ export PATH=$PATH:/foglight/app/bin

[foglight@mgt-server ~]$ fms --stop
2018-10-01 15:15:17.000 INFO [native] Sending stop request to 'Foglight'
process running in /foglight/app/state (pid 12570).
2018-10-01 15:15:17.000 INFO [native] Shutdown request transmitted.

After a few seconds you can observe that the PostgreSQL and Management Server processes are down:

[foglight@mgt-server ~]$ ps -ef | grep foglight
root 13182 2656 0 15:15 pts/0 00:00:00 su - foglight
foglight 13183 13182 0 15:15 pts/0 00:00:00 -bash
foglight 13251 13183 0 15:15 pts/0 00:00:00 ps -ef
foglight 13252 13183 0 15:15 pts/0 00:00:00 grep --color=auto foglight

I hope this helps and please do not hesitate to contact us for more details.

 

Cet article Foglight: Monitoring solution for databases [Part 01] est apparu en premier sur Blog dbi services.

PDB Snapshot Carousel Oracle 18.3

Tue, 2018-10-02 09:09

A new feature of Oracle 18c is the PDB snapshot carousel. As indicated by its name, a PDB snapshot is a copy of a PDB at a specific point in time. You have the possibility to create up to eight snapshots; when you reach the maximum number of snapshots, the oldest one is overwritten. The snapshot carousel is simply the set of all your PDB snapshots.

We have the possibility to create automatic snapshots using the “snapshot mode every” clause when creating or altering a PDB. For example, you can change the snapshot mode of a PDB to every 3 hours:

SQL> alter session set container=pdb;

Session altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
MANUAL

SQL> alter pluggable database snapshot mode every 3 hours;

Pluggable database altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
AUTO			  3

To return to manual mode, just type:

SQL> alter pluggable database snapshot mode manual;

Pluggable database altered.

We can also create PDB snapshots manually, with or without specifying a name:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

SQL> alter pluggable database snapshot;

Pluggable database altered.

We can query the DBA_PDB_SNAPSHOTS view to display the PDB snapshot locations:

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS ORDER BY SNAP_SCN;

CON_ID CON_NAME SNAPSHOT_NAME              SNAP_SCN FULL_SNAPSHOT_PATH
------ -------- -------------------------- -------- ----------------------------------------------------------
     3 PDB      PDB_SNAP                    1155557 /home/oracle/oradata/DB18/pdb/snap_2263384607_1155557.pdb
     3 PDB      SNAP_2263384607_987432172   1155823 /home/oracle/oradata/DB18/pdb/snap_2263384607_1155823.pdb

If you want to drop a snapshot, you have two methods:

Either you delete the snapshot with the following alter pluggable database statement:

SQL> alter pluggable database drop snapshot SNAP_2263384607_987432172;

Pluggable database altered.

Otherwise you set the MAX_PDB_SNAPSHOTS property to zero in the PDB:

You can query the CDB_PROPERTIES and CDB_PDBS to display the parameter value:

SELECT r.CON_ID, p.PDB_NAME, PROPERTY_NAME,
  	PROPERTY_VALUE AS value, DESCRIPTION
  	FROM   CDB_PROPERTIES r, CDB_PDBS p
  	WHERE  r.CON_ID = p.CON_ID
  	AND    PROPERTY_NAME LIKE 'MAX_PDB%'
  	AND    description like 'maximum%'
  	ORDER BY PROPERTY_NAME

CON_ID  PDB_NAME  PROPERTY_NAME      VALUE  DESCRIPTION
     3  PDB       MAX_PDB_SNAPSHOTS      8  maximum number of snapshots for a given PDB

And if you set it to zero all your PDB snapshots will be dropped:

SQL> alter session set container=pdb;

Session altered.

SQL> alter pluggable database set max_pdb_snapshots = 0;

Pluggable database altered.

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS
ORDER BY SNAP_SCN;

no rows selected

But the main interest of PDB snapshots is to create new PDBs from a productive environment, based on a point in time of the production PDB.

So we create a PDB snapshot named PDB_SNAP:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

And now we create a PDB from the PDB_SNAP snapshot:

SQL> create pluggable database PDB2 from PDB using snapshot PDB_SNAP create_file_dest='/home/oracle/oradata/DB18/pdb2';

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB		                  READ WRITE NO
	 4 APPPSI			  READ WRITE NO
	 5 PDB2 			  READ WRITE NO

We also have the possibility to change the snapshot mode:

SQL> alter session set container=pdb;

Session altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
MANUAL


SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE EVERY 1 HOURS;

Pluggable database altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
AUTO		  1

We also have the possibility to create a PDB that takes snapshots every 15 minutes:

SQL> create pluggable database pdb_new from pdb
  2  file_name_convert=('pdb','pdb_new')
  3  snapshot mode every 15 minutes;

Pluggable database created.

There is a prerequisite for configuring automatic PDB snapshots: the CDB must be in local undo mode.
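A quick way to check this from the shell, connected as SYSDBA to the CDB root, might be:

sqlplus -s / as sysdba <<'EOF'
-- should return TRUE before configuring automatic snapshots
SELECT property_value AS local_undo_enabled
FROM   database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED';
EOF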

Finally the snapshots are correctly created in my environment every 15 minutes:

oracle@localhost:/home/oracle/oradata/DB183/pdb/ [DB183] ls -lrt snap*
-rw-r--r--. 1 oracle dba 65690276 Oct  1 15:04 snap_3893567541_798493.pdb
-rw-r--r--. 1 oracle dba 65740202 Oct  1 15:19 snap_3893567541_801189.pdb
-rw-r--r--. 1 oracle dba 65823279 Oct  1 15:34 snap_3893567541_803706.pdb

And to verify that it works as expected, I created a location table in the psi schema of my pdb_new environment, with two records inserted at 15:20:

SQL> create table psi.location (name varchar2(10));

Table created.

SQL> insert into psi.location values ('London');

1 row created.

SQL> insert into psi.location values('Paris');

1 row created.

SQL> commit;

And we create a new PDB from the snapshot to verify that the data is there:

SQL> create pluggable database pdb_psi from pdb_new 
     using snapshot SNAP_45745043_988386045 
     create_file_dest='/home/oracle/oradata/DB183/pdb_psi';

Pluggable database created.

We open pdb_psi and we check:

SQL> alter session set container=pdb_psi;

Session altered.

SQL> select * from psi.location;

NAME
----------
London
Paris

This feature might be very useful for testing purposes: imagine you have a production PDB; you only have to create a refreshable clone of it named PDB_MASTER and configure it to create daily snapshots. Whenever you need a PDB for testing, you simply create a clone from any of its snapshots, as sketched below.
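As a minimal sketch of that idea – assuming a database link called prod_link (a name I just made up) pointing to the production CDB, and that the refresh and snapshot clauses can be combined this way – PDB_MASTER could be created like this:

sqlplus -s / as sysdba <<'EOF'
-- prod_pdb and prod_link are placeholders for the production PDB and its database link
CREATE PLUGGABLE DATABASE pdb_master FROM prod_pdb@prod_link
  REFRESH MODE EVERY 720 MINUTES
  SNAPSHOT MODE EVERY 24 HOURS
  CREATE_FILE_DEST = '/home/oracle/oradata/DB183/pdb_master';
EOF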

Conclusion

All those tests have been run on a Linux x86-64 server, with Oracle 18.3 Enterprise Edition. My DB183 database has been initialized with the “_exadata_feature_on” hidden parameter to avoid the “ORA-12754: Feature PDB Snapshot Carousel is disabled due to missing capability” error message.
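For the record, setting such a hidden parameter typically looks like the following (undocumented and unsupported, so only on a sandbox, and it requires an instance restart to take effect):

sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET "_exadata_feature_on" = TRUE SCOPE = SPFILE;
EOF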

If you have a look at the Database Licensing User Manual:

https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-B6113390-9586-46D7-9008-DCC9EDA45AB4

Feature / Option / Pack   SE2   EE   EE-ES   DBCS SE   DBCS EE   DBCS EE-HP   DBCS EE-EP   ExaCS
PDB Snapshot Carousel     N     N    Y       N         Y         Y            Y            Y

 

You will see that PDB Snapshot Carousel (like a lot of interesting new features in Oracle 18.3) is only available on Engineered Systems or in the Cloud, and not with Enterprise Edition on third-party hardware. I really hope Oracle will change this in future releases.

 

Cet article PDB Snapshot Carousel Oracle 18.3 est apparu en premier sur Blog dbi services.
