Feed aggregator

Force logging

Tom Kyte - Tue, 2018-07-17 13:06
Hi Tom, my DC and DR databases are working fine and both are in sync. Today I found that force logging is not enabled, so we have planned to enable it. For that: 1. How can we do it on the primary? 2. Does it require any downt...
Categories: DBA Blogs

Restarting a failed transportable tablespace metadata import

Yann Neuhaus - Tue, 2018-07-17 09:39

I'm currently working on a project to migrate a data warehouse database from Solaris to Linux Intel (endian change). We use the cross-platform incremental backup method described in My Oracle Support Note 1389592.1 for that.
That is, incremental backups are applied to the database files, and during the migration the recovered datafiles are attached to the target database via the transportable tablespace method. When testing the transportable tablespace metadata import I got an error for a table:


Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"DWHT"."TDWHTAB" failed to create with error:
ORA-01843: not a valid month
Failing sql is:
CREATE TABLE "DWHT"."TDWHTAB" (...,"PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE,...

That is, the metadata of all tables in the tablespace was imported successfully except for one. The reason for the error was a default definition of the column "PROCESSING_TM" without a TO_DATE cast, i.e. it relied on the NLS_DATE_FORMAT setting. For example:


SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE)
  *
ERROR at line 1:
ORA-01843: not a valid month
 
SQL> alter session set nls_date_format='dd-mm-yyyy';
 
Session altered.
 
SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
 
Table created.

There are different reasons why a metadata import may fail for specific tables (see e.g. My Oracle Support Note 1082116.1 for another one). The question was: how to restart the import without going through the whole process of copying datafiles from the source database again? As we imported the metadata through a network link, the tablespace on the source database and the tablespace on the target database were still set to READ ONLY.

So first of all I fixed the error in the table definition on the source database:


alter table "DWHT"."TDWHTAB" modify ("PROCESSING_TM" DATE DEFAULT TO_DATE('01-01-0001', 'DD-MM-YYYY'));

That was possible even with the table's associated tablespace in READ ONLY mode, as it is just a dictionary metadata update and does not touch the tablespace itself.
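As a quick sanity check before restarting the import, the new default can be verified in the data dictionary. This is a minimal sketch using the standard DBA_TAB_COLUMNS view; the owner, table and column values are simply the ones from the failing DDL above:

select data_default
from   dba_tab_columns
where  owner = 'DWHT'
and    table_name = 'TDWHTAB'
and    column_name = 'PROCESSING_TM';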

On the target database I cleaned up the tablespace of the failed table metadata import:


drop tablespace DWH_DATA including contents;

It’s of course important to NOT specify “AND DATAFILES” when dropping the tablespace to avoid having to copy all files again.

In my case I actually transported 146 tablespaces and modified the impdp parameter file xttplugin.par to contain only the single tablespace that contained the failing table. Of course you have to drop and re-include all tablespaces that have tables with dependencies on each other. In my case that was not an issue, as my tables had no dependencies on tables in other tablespaces.

Afterwards I could just restart my metadata-import with the single tablespace:


$ cat xttplugin2.par
directory=DATA_PUMP_DIR
logfile=tts_imp2.log
network_link=ttslink
transport_full_check=no
transport_tablespaces=DWH_DATA
transport_datafiles='/u01/DWH/oradata/DWH_DATA_001.dbf','/u01/DWH/oradata/DWH_DATA_002.dbf'
$ impdp parfile=./xttplugin2.par
Password: / as sysdba

So to restart a failed transportable tablespace metadata import, just fix the root cause, drop the associated tablespace on the target database without dropping the datafiles, and restart the import. That comes in handy especially in cross-platform incremental backup migration scenarios.

 

The post Restarting a failed transportable tablespace metadata import appeared first on Blog dbi services.

Oracle Helps Businesses Launch Mobile Marketing Faster and Easier

Oracle Press Releases - Tue, 2018-07-17 07:00
Press Release
Oracle Helps Businesses Launch Mobile Marketing Faster and Easier Industry’s first SMS Public Aggregator Network takes complexity out of setting up mobile marketing

Redwood City, Calif.—Jul 17, 2018

To help marketers quickly and easily source the best operator for maximizing the ROI of their mobile campaigns, Oracle Responsys introduced the SMS Public Aggregator Network (SPAN). Oracle’s SPAN is a powerful new self-service application within Oracle Marketing Cloud that makes it easier for customers to find, order, and manage SMS services directly from the aggregator of their choice.

Mobile consumer engagement has become a central tool in today’s “mobile first” economy, with messaging now a preferred channel for reaching consumers. In fact, Gartner research states “various sources report SMS open and response rates as high as 98 percent and 45 percent, respectively—in contrast to corresponding figures of 20 percent and six percent for email.”[i] Integrating SMS into marketing campaigns requires finding an aggregator network that aligns with broader campaign goals. This can often be a laborious, time-consuming and complex process. But SPAN for Oracle Responsys removes this complexity by providing an easy-to-use, self-service application that enables customers to quickly and easily identify aggregator networks, based on region, price or route type.

“Marketers want to take advantage of mobile channels, but there are often too many barriers in the way. For SMS, one of those challenges is establishing aggregator relationships, which can be a time consuming and tedious task often made worse by limited choice and flexibility,” said Tony Castiglioni, VP of Product Management, Oracle Responsys. “Our new SPAN application eases this process and gives marketers the freedom to choose and use the aggregator best aligned with their marketing needs—whether it be by price, geography, or performance.” 

With the launch of SPAN, all Oracle Responsys customers can now take advantage of SMS to reach and engage customers. The new SMS marketing capabilities build on Oracle Marketing Cloud’s commitment to help marketers convert customers across devices with orchestrated mobile marketing. In addition to the new SMS marketing capabilities, Oracle Responsys equips mobile marketers with industry-leading tools like Push, Rich Push, In-App and MMS messaging to connect with audiences across devices to drive customer engagement, retention, and grow mobile revenue.

Oracle Marketing Cloud is part of Oracle Customer Experience (CX) Cloud Suite, which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

For more information on the new capabilities in Oracle Marketing Cloud visit www.oracle.com/marketingcloud/index.html or read this blog.

Like this story? Tweet this: @Oracle unveils powerful new SMS marketing capabilities that help Responsys customers quickly and easily launch SMS marketing campaigns https://bit.ly/2NTBL4J

[i] Gartner Research, “Keep SMS in Your Mobile Toolbox,” Charles S. Golvin, April 2018.

Contact Info
Siobhan Lyons
Oracle
1-650-506-3753
siobhan.lyons@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Talk to a Press Contact

Siobhan Lyons

  • 1-650-506-3753

Hungary’s MKB Bank Goes Fully Digital with Oracle

Oracle Press Releases - Tue, 2018-07-17 07:00
Press Release
Hungary’s MKB Bank Goes Fully Digital with Oracle First all digital bank in Hungary can now implement new and improved services faster

Redwood Shores, Calif.—Jul 17, 2018

MKB Bank, Hungary’s fourth largest commercial bank, has today announced it is the first in the country to take a leap towards digital and open banking after transforming its entire technology system. Its transformation has included a full replacement of its core banking system, the retirement and consolidation of roughly 10 additional systems, and the modernization of its entire IT architecture.

MKB implemented Oracle FLEXCUBE, a core banking solution that has been adopted by more than 600 financial institutions around the world, as part of a broader restructuring project to streamline its internal and customer-facing operations and provide customers with more personalized services across multiple channels. MKB’s aim is to reduce its go-to-market time for new services such as corporate and retail lending, as well as digital only services, and to manage customer requests more quickly and easily. 

“It is not every day that a historically conservative medium-sized player commits its resources to modernizing its systems, changing its culture and taking a leap towards digitalization,” said Ádám Balog, Chief Executive Officer, MKB. “It is through the reliability of partners like Oracle and the management of our stakeholders that we were able to make this move in such a short time.”

“Banks still have some catching up to do when it comes to serving customers. It’s time to stop resisting change and develop a model based on the way people want to interact with banking services,” said Mark Hetényi, Chief Digital, Retail and Strategy Officer, MKB. “This starts with a robust digital offering. We replaced our core banking platform with Oracle FLEXCUBE because it supports fully digital customer experiences.”

MKB was intent on fast-tracking its technology program. Despite having comparatively fewer resources than major global banks operating in Hungary, MKB achieved its digital transformation in just two and a half years. The average deployment for projects of this scope typically takes between four and five years.

“MKB recognized that digital services are a key differentiator. As a bank with its eyes firmly planted on the future of customer service, MKB’s bullish implementation of our flexible Oracle FLEXCUBE technology has put it in a position to pull ahead in a competitive Hungarian market,” said Sonny Singh, senior vice president and general manager, Oracle Financial Services.

Contact Info
Judi Palmer
Oracle Corporation
+ 1 650 784 7901
judi.palmer@oracle.com
Michael Ruppert
CMG
+ 44(0) 207 067 0641
mruppert@cmgrp.com
About MKB Bank

MKB Bank is a leading digital bank in Hungary, which has put open banking as one of the pillars of its strategy and believes in partnering with its customers, corporate and private banking clients, as well as the Fintech world through innovations such as its new core system and IT architecture, an innovative MKB Fintechlab ecosystem it has built, as well as its network. The bank is considering an IPO in 2019. 

For more information about MKB Bank, visit mkb.hu.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Judi Palmer

  • + 1 650 784 7901

Michael Ruppert

  • + 44(0) 207 067 0641

Rebuilding Indexes: Danger With Clustering Factor Calculation (Chilly Down)

Richard Foote - Tue, 2018-07-17 01:33
Let me start by saying if you don’t already follow Jonathan Lewis’s excellent Oracle blog, do yourself a favour. In a recent article, Jonathan highlighted a danger with rebuilding indexes (or indeed creating an index) when used in relation to collecting index statistics with the TABLE_CACHED_BLOCKS preference. I’ve discussed the importance of the TABLE_CACHED_BLOCKS statistics […]
Categories: DBA Blogs

Oracle Database 18c: Documentation released!

Dietrich Schroff - Mon, 2018-07-16 12:58
Today I surfed to docs.oracle.com and there I found the documentation for Oracle Database 18c.


In March 2018 I wrote about the new installation procedure for Oracle Database 18c via RPM, which was announced in the "new features".

The installation guide for Linux now contains the commands for installing the database via RPM. Here is a screen copy of chapter 11:
 rpm -ivh oracle-ee-db-18.1.0.0.0-1.x86_64.rpm
This command creates the Oracle home at the location /opt/oracle/product/18.0.0.0.0-1/dbhome_1.
Cool!

The next step is to run the dbca:
7 The screen displays information that prompts you to create a database by using Oracle Database Configuration Assistant (Oracle DBCA) in silent mode.
[...]
8 Log in as the oracle user and run Oracle DBCA in silent mode to create the database.
[...]
Oracle DBCA creates the Oracle Database listener, the Oracle Database instance, starts the instance, and completes the Oracle Database creation process. Review the status information that is displayed on your screen.

Here is the link to docs.oracle.com chapter 11 "Running RPM Packages to Install Oracle Database".
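For orientation only, here is a minimal sketch of what such a silent DBCA call could look like. The parameter values are assumptions for illustration, not copied from the 18c guide, so check chapter 11 for the exact command:

# run as the oracle user after the RPM installation
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName ORCLCDB -sid ORCLCDB \
  -createAsContainerDatabase true \
  -sysPassword <sys_password> -systemPassword <system_password> \
  -datafileDestination /opt/oracle/oradata \
  -characterSet AL32UTF8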

#Exasol Cluster Architecture

The Oracle Instructor - Mon, 2018-07-16 11:57

This article gives a more detailed view on the Exasol Cluster Architecture. A high level view is provided here.

Exasol Cluster Nodes: Hardware

An Exasol Cluster is built with commodity Intel servers without any particularly expensive components. SAS hard drives and Ethernet cards are sufficient. In particular, there is no need for an additional storage layer like a SAN.

See here for a list of Exasol Certified Servers.

Disk layout

As a best practice the hard drives of Exasol Cluster nodes are configured as RAID 1 pairs. Each cluster node holds four different areas on disk:

1. OS, 50 GB in size, containing CentOS Linux, EXAClusterOS and the Exasol database executables

2. Swap, 4 GB in size

3. Data, 50 GB in size, containing logfiles, coredumps and BucketFS

4. Storage, consuming the remaining capacity of the hard drives for the Data Volumes and Archive Volumes

The first three areas can be stored on dedicated disks, in which case these disks are also configured in RAID 1 pairs, usually with a smaller size than those that contain the volumes. More common than having dedicated disks is having servers with only one type of disk. These are configured as hardware RAID 1 pairs, and on top of that software RAID 0 partitions are striped across all disks to contain the OS, Swap and Data areas.

Exasol 4+1 Cluster: Software Layers

This popular multi-node cluster serves as an example to illustrate the concepts explained. It is called a 4+1 cluster because it has 4 Active nodes and 1 Reserve node. Active and Reserve nodes have the same layers of software available. The purpose of the Reserve node is explained here. Upon cluster installation, the License Server copies these layers as tarballs across the private network to the other nodes. The License Server is the only node in the cluster that boots from disk. Upon cluster startup, it provides the required software layers to the other cluster nodes.

Exasol License Essentials

There are three types of licenses available:

Database RAM License: This most commonly used model specifies the total amount of RAM that can be assigned to databases in the cluster.

Raw Data License: Specifies the maximum size of the raw data you can store across databases in the cluster.

Memory Data License: Specifies the maximum size of the compressed data you can store across all databases.

For licenses based on RAM, Exasol checks the RAM assignment at the start of the database. If the RAM in use exceeds the maximum RAM specified by the license, the database will not start.

For licenses based on data size (raw data license and memory data license), Exasol periodically checks the size of the data. If the data size exceeds the value specified in the license, the database does not permit any further data insertion until the usage drops below the specified value.

Customers receive their license as a separate file. To activate the license, these license files are uploaded to the License Server using EXAoperation.

EXAStorage volumes

Storage Volumes are created with EXAoperation on specified nodes.

EXAStorage provides two kinds of volumes:

Data volumes:

Each database needs one volume for persistent data and one temporary volume for temporary data.

While the temporary volume is automatically created by a database process, the persistent data volume has to be created by an Exasol Administrator upon database creation.

Archive volumes:

Archive volumes are used to store backup files of an Exasol database.

Exasol 4+1 Cluster: Data & Archive Volume distribution

Data Volumes and Archive Volumes are hosted on the hard drives of the active nodes of a cluster.

They consume most of the capacity of these drives. The license server usually hosts EXAoperation.

EXAoperation Essentials

EXAoperation is the major management GUI for Exasol Clusters, consisting of an Application Server and a small Configuration Database, both located on the License Server under normal circumstances. EXAoperation can be accessed from all Cluster Nodes via HTTPS. Should the License Server go down, EXAoperation will failover to another node while the availability of the Exasol database is not affected at all.

Shared-nothing architecture (MPP processing)

Exasol was developed as a parallel system and is constructed according to the shared-nothing principle. Data is distributed across all nodes in a cluster. When responding to queries, all nodes co-operate and special parallel algorithms ensure that most data is processed locally in each individual node’s main memory.

When a query is sent to the system, it is first accepted by the node the client is connected to. The query is then distributed to all nodes. Intelligent algorithms optimize the query, determine the best plan of action and generate needed indexes on the fly. The system then processes the partial results based on the local datasets. This processing paradigm is also known as SPMD (single program multiple data). All cluster nodes operate on an equal basis; there is no Master Node. The global query result is delivered back to the user through the original connection.

The picture above shows a cluster with 4 data nodes and one reserve node. The license server is the only server that boots from disk. It provides the OS used by the other nodes over the network.

Exasol uses a shared nothing architecture. The data stored in this database is symbolized with A,B,C,D to indicate that each node contains a different part of the database data. The active nodes n11-n14 each host database instances that operate on their part of the database locally in an MPP way. These instances communicate and coordinate over the private network.

Exasol Network Essentials

Each cluster node needs at least two network connections: one for the Public Network and one for the Private Network. The Public Network is used for client connections; 1 Gb Ethernet is usually sufficient. The Private Network is used for the cluster interconnect of the nodes; 10 Gb Ethernet or higher is recommended for it. Optionally, the Private Network can be separated into one Database Network (database instances communicate over it) and one Storage Network (mirrored segments are synchronized over this network).

Exasol Redundancy Essentials

Redundancy is an attribute that can be set upon EXAStorage Volume creation. It specifies the number of copies of the data that is hosted on Active Cluster nodes. In practice this is either Redundancy 1 or Redundancy 2. Redundancy 1 means there is no redundancy, so if one node fails, the volume with that redundancy is no longer available. Typically that is only seen with one-node Clusters. Redundancy 2 means that each node holds a copy of data that is operated on by a neighbor node, so the volume remains available if one node fails.

Exasol 4+1 Cluster: Redundancy 2

If volumes are configured with redundancy 2 – which is a best practice – then each node holds a mirror of data that is operated on by a neighbor node. If, for example, n11 modifies A, the mirror A' on n12 is synchronized over the private network. Should an active node fail, the reserve node will step in and start an instance.

 

Categories: DBA Blogs

Google Cloud Spanner – no decimal numeric data types

Yann Neuhaus - Mon, 2018-07-16 10:51

Google Cloud Spanner is a distributed relational database focused on scalability without compromising consistency and integrity. It is available only as a managed service in Google Cloud. Basically, the idea is to keep the scalability advantages of NoSQL databases (like Bigtable) while adding transactions, relational tables, SQL, structured data,… as in the relational databases we have loved for decades.
The commercial pitch includes all the NoSQL buzzwords, with the addition of the legacy properties of SQL databases:
Cloud Spanner is a fully managed, mission-critical, relational database service that offers transactional consistency at global scale, schemas, SQL (ANSI 2011 with extensions), and automatic, synchronous replication for high availability.
Here I’m testing something that is not mentioned, but is taken for granted with all SQL databases: the ability to add numbers without erroneous arithmetic results.

It is easy to test on the Google Cloud (which offers 1 year trials) by creating an instance:
[Screenshot: creating a Spanner instance]

Then create a Spanner database:
[Screenshot: creating a Spanner database]

And create a table:
[Screenshot: creating a table]

The table creation can also use the SQL create table statement. Here I'm testing one of the most important features of SQL databases: the numeric datatypes. This is where humans and computers do not speak the same language: humans count with their ten fingers, whereas computers deal only with binary digits. Human numbers are decimal. Computer numbers are binary.

It seems that Google Spanner is binary only. According to the documentation, the only numeric types are:

  • INT64 for signed integers up to 9E18
  • FLOAT64 for floating point IEEE-754

So, there are no decimal datatypes, and decimal values will be approximated by binary values. That is fine for storing computer numbers, but not for human numbers such as prices, salaries,…

In order to show the problem I’ve created a table with FLOAT64:

CREATE TABLE NUMBERS (
ID INT64 NOT NULL,
AMOUNT FLOAT64,
) PRIMARY KEY (ID)

The SQL Query interface does not allow DML other than SELECT:
[Screenshot: DML not supported]

So we can use the API or this simple form from the 'data' tab:
[Screenshot: inserting rows from the 'data' tab]

I’ve added 10 rows with ‘0.1’ which is easy to represent in decimal arithmetic, but not in binary arithmetic. Look at the sum:
[Screenshot: result of the SUM query]

This is binary arithmetic applied to decimal numbers: approximation. You can select each row and see '0.1', but when you sum all 10 rows together, you get less than 1. That's probably close enough for some 'BigData' usage, but accountants will not like it.
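For reference, a minimal sketch of the kind of query behind the screenshot above. The exact digits returned depend on the FLOAT64 rounding, so the value in the comment is illustrative rather than copied from the test:

SELECT SUM(AMOUNT) AS TOTAL FROM NUMBERS;
-- mathematically 10 * 0.1 = 1, but with FLOAT64 the result comes back
-- as something like 0.9999999999999999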

If you wonder why it takes 100 milliseconds for this 10-row table, remember that this is a distributed database across 3 continents. Here is the execution plan:
[Screenshot: execution plan]

So what?

Do not forget that all the new trends for databases, in the 'micro-services' era, are focused on specific use cases. They do not compete with the 'old' relational databases, which are general purpose and have integrated, version after version, all the different ways to store and process data shared by multiple applications. Those NoSQL and NewSQL databases can be considered as an alternative only within the scope of what they are designed for. Spanner was designed for Google internal use in Google AdWords and then provided as a service for similar use cases. It was developed to solve a specific problem: the lack of transactions in Bigtable.

Note that the Open Source alternative that is close to Google Spanner is CockroachDB which has a DECIMAL datatype to store fixed-point decimal numbers.
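To illustrate the difference, here is a minimal sketch of the same test with a fixed-point DECIMAL column, written in the PostgreSQL dialect that CockroachDB follows (the table and column names are just the ones used above):

CREATE TABLE numbers (id INT PRIMARY KEY, amount DECIMAL(16,2));
INSERT INTO numbers (id, amount)
  SELECT i, 0.1 FROM generate_series(1, 10) AS g(i);
SELECT SUM(amount) FROM numbers;
-- DECIMAL arithmetic is exact: the sum is 1.00, not 0.999...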

 

The post Google Cloud Spanner – no decimal numeric data types appeared first on Blog dbi services.

Direct IOT

Jonathan Lewis - Mon, 2018-07-16 07:02

A recent (automatic ?) tweet from Connor McDonald highlighted an article he’d written a couple of years ago about an enhancement introduced in 12c that allowed for direct path inserts to index organized tables (IOTs). The article included a demonstration that seemed to suggest that direct path loads to IOTs were of no benefit, and ended with the comment (which could be applied to any Oracle feature): “Direct mode insert is a very cool facility, but it doesn’t mean that it’s going to be the best option in every situation.”

Clearly it’s necessary to pose the question – “so when would direct mode insert be a good option for IOTs?” – because if it’s never a good option you have to wonder why it has been implemented. This naturally leads on to thinking about which tests have not yet been done – what aspects of IOTs did Connor not get round to examining in his article. (That’s a standard principle of trouble-shooting, or testing, or investigation: when someone shows you a test case (or when you think you’ve finished testing) one thing you should do before taking the results as gospel is to ask yourself what possible scenarios have not been covered by the test.)

So if you say IOT what are the obvious tests once you’ve got past the initial step of loading the IOT and seeing what happens. First, I think, would be “What if the IOT weren’t empty before the test started”; second would be “IOTs can have overflow segments, what impact might one have?”; third would be “Do secondary indexes have any effects?”; finally “What happens with bitmap indexes and the requirement for a mapping table?” (Then, of course, you can worry about mixing all the different possibilities together – but for the purposes of this note I’m just going to play with two simple examples: non-empty starting tables, and overflow segments.)

Here’s some code to define a suitable table:


create table t2 
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	3 * rownum			id,
	lpad(rownum,10,'0')		v1,
	lpad('x',50,'x')		padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
order by
	dbms_random.value
;

begin
	dbms_stats.gather_table_stats(
		ownname     => null,
		tabname     => 'T2',
		method_opt  => 'for all columns size 1'
	);
end;
/

create table t1(
	id,
	v1,
	padding,
	constraint t1_pk primary key(id)
)
organization index
-- including v1
-- overflow
nologging
as
select * from t2
;

begin
	dbms_stats.gather_table_stats(
		ownname     => null,
		tabname     => 'T1',
		method_opt  => 'for all columns size 1'
	);
end;
/

I’ve created a heap table t2 that holds 100,000 rows with an id column that is randomly ordered; then I’ve used this table as a source to create an IOT, with the option to have an overflow segment that contains just the padding column. I’ve used 3 * rownum to define the id column for t2 so that when I insert another copy of t2 into t1 I can add 1 (or 2) to the id and interleave the new data with the old data. (That’s another thought about IOT testing – are you loading your data in a pre-existing order that suits the IOTs or is it arriving in a way that’s badly out of order with respect to the IOT ordering; and does your data go in above the current high value, or spread across the whole range, or have a partial overlap with the top end of the range and then run on above it?)

Having created the starting data set, here’s the test:


execute snap_my_stats.start_snap
execute snap_events.start_snap

insert 
	/*  append */
into t1
select
	id + 1, v1, padding
from
	t2
;


execute snap_events.end_snap
execute snap_my_stats.end_snap

All I’m doing is using a couple of my snapshot packages to check the work done and time spent while inserting 100,000 interleaved rows – which are supplied out of order – into the existing table. As shown, the “append” is a comment, not a hint, so I’ll be running the test case a total of 4 times: no overflow, with and without the hint – then with the overflow, with and without the hint. (Then, of course, I could run the test without the overflow but with an index on v1 – see the sketch below.)
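For what it's worth, that untested variant would only need a secondary index in place before the load. A minimal sketch reusing the tables above (the +2 offset simply avoids colliding with the ids inserted by the test itself):

create index t1_v1 on t1(v1) nologging;

insert /*+ append */ into t1
select id + 2, v1, padding from t2;

commit;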

Here are some summary figures from the tests – first from the test without an overflow segment:

                                      Unhinted       With Append
                                  ============      ============
CPU used when call started                 153               102
CPU used by this session                   153               102
DB time                                    166               139

redo entries                           130,603            42,209
redo size                           78,315,064        65,055,376

sorts (rows)                                30           100,031

You’ll notice that with the /*+ append */ hint in place there’s a noticeable reduction in redo entries and CPU time, but this has been achieved at a cost of sorting the incoming data into order. The reduction in redo (entries and size) is due to an “array insert” effect that Oracle can take advantage of with the delayed index maintenance that takes place when the append hint is legal (See the section labelled Option 4 in this note). So even with an IOT with no overflow there’s a potential benefit to gain from direct path loading that depends on how much the new data overlaps the old data, and there’s a penalty that depends on the amount of sorting you’d have to do.

What happens in my case when I move the big padding column out to an overflow segment – here are the equivalent results:


Headline figures                      Unhinted       With Append
================                  ============      ============
CPU used when call started                 158                52
CPU used by this session                   158                52
DB time                                    163                94
redo entries                           116,669            16,690
redo size                           51,392,748        26,741,868
sorts (memory)                               4                 5
sorts (rows)                                33           100,032

Interestingly, comparing the unhinted results with the previous unhinted results, there’s little difference in the CPU usage between having the padding column in the “TOP” section of the IOT compared to having it in the overflow segment, though there is a significant reduction in redo (the index entries are still going all over the place one by one, but the overflow blocks are being pinned and packed much more efficiently). The difference between having the append hint or not, though, is dramatic. One third of the CPU time (despite still having 100,000 rows to sort), and half the redo. One of the side effects of the overflow, of course, is that the things being sorted are much shorter (only the id and v1 columns that go into the TOP section, and not the whole IOT row).

So, if you already have an overflow segment that caters for a significant percentage of the row, it looks as if the benefit you could get from using the /*+ append */ hint would far outweigh the penalty of the sorting you have to pay. Of course, an IOT with a large overflow doesn’t look much different from a heap table with an index – so perhaps that result isn’t very surprising.

I’ll close by re-iterating Connor’s closing comment:

Direct mode insert is a very cool facility, but it doesn’t mean that it’s going to be the best option in every situation.

Before you dive in and embrace it, or ruthlessly push it to one side, make sure you do some testing that reflects the situations you have to handle.

Oracle Named a Category Leader for Data Integrity and Control by Chartis Research

Oracle Press Releases - Mon, 2018-07-16 07:00
Press Release
Oracle Named a Category Leader for Data Integrity and Control by Chartis Research Recognizes Oracle Financial Services’ Completeness of Offering and Market Potential

Redwood Shores, Calif.—Jul 16, 2018

Today Oracle Financial Services announced it has been named by Chartis Research as a Category Leader for Data Integrity and Control (DI&C) on Chartis’ FinTech Quadrant for DI&C solutions in Financial Services.

For its report Data Integrity and Control in Financial Services 2018, Chartis considered two key dimensions when evaluating vendors: completeness of offering and market potential. Oracle’s positioning in the quadrant highlights its depth and breadth of functionality, technology and content.

“We are proud to be recognized by Chartis as a category leader for our data integrity and control capabilities,” said Ambreesh Khanna, Group Vice President, Financial Services Analytical Applications, Oracle Financial Services. “As the volume of data and number of industry regulatory requirements keeps growing, financial institutions are in need of robust data governance, quality and controls more than ever before. Our placement on the quadrant re-affirms our status as a best-in-class provider with an unrivaled array of services and capabilities, and we are committed to continuously strengthening our toolset.”

By positioning Oracle as a category leader, Chartis recognizes the range and diversity of its offerings, its sector coverage, and the financial strength it has to absorb demand volatility in specific industry sectors or geographic regions. Chartis also rated Oracle for its strong alliance strategies with leading consulting firms and systems integrators.

“Oracle combines deep domain knowledge in various risk topics with strong technology assets and capabilities,” said Rob Stubbs, Head of Research at Chartis. “The company is positioned to address a range of needs, from complex risk management and technology to standardized and integrated solutions.”

The 2018 market report by Chartis examines how the dynamics in this space have shifted since its last update in 2016, and the implications for Financial Institutions and technology vendors. The FinTech Quadrant is a proprietary methodology developed specifically for the financial technology marketplace. It takes into account vendors’ product, technology and organizational capabilities.

Chartis’ FinTech Quadrant reports are written by experienced analysts with hands-on experience of selecting, developing and implementing financial technology solutions for a variety of international companies in a range of industries including banking, insurance and capital markets. The findings and analyses in the FinTech Quadrant reports reflect their analysts’ considered opinions, along with research into market trends, participants, expenditure patterns, and best practices.

More information and a copy of the full report can be found on Chartis’ website here.

Contact Info
Judi Palmer
Oracle
+1-650-607-6598
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Chartis

Chartis is a research and advisory firm that provides technology and business advice to the global financial services industry. Chartis provides independent market intelligence on market dynamics, regulatory trends, technology trends, best practices, competitive landscapes, market sizes, expenditure priorities, and mergers and acquisitions. Chartis’ FinTech Quadrant reports are written by experienced analysts with hands-on experience of selecting, developing and implementing financial technology solutions for a variety of international companies in a range of industries including banking, insurance and capital markets.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1-650-607-6598

Brian Pitts

  • +1 312 475 5921

Global Businesses Turn to Oracle Blockchain Service to Speed Transactions Securely

Oracle Press Releases - Mon, 2018-07-16 07:00
Press Release
Global Businesses Turn to Oracle Blockchain Service to Speed Transactions Securely Arab Jordan Investment Bank, CargoSmart, and Intelipost are among early adopters of enterprise-class general ledger service

Redwood Shores, Calif.—Jul 16, 2018

Businesses around the world have been deploying an early adopter version of Oracle Blockchain Cloud Service, which becomes generally available today. The service allows organizations to easily build blockchain networks to drive more secure and efficient transactions and to track goods through supply chains on a global scale. Arab Jordan Investment Bank, CargoSmart, Certified Origins, Indian Oil, Intelipost, MTO, Neurosoft, Nigeria Customs, Sofbang, Solar Site Design and TradeFin are among the many global organizations that already have adopted Oracle’s blockchain platform.

Blockchain has the power to fundamentally transform how every industry does business by making interactions more secure, transparent, efficient and cost-effective. Oracle Blockchain Cloud Service provides customers with a development platform to build their own networks, and to quickly integrate with Oracle SaaS and third-party applications they already use, as well as other blockchain networks and Oracle PaaS services. It also enables users to provision blockchain networks, join other organizations, and deploy and run smart contracts to update and query the ledger. Oracle’s blockchain platform leverages the company’s decades of experience across industries and its extensive partner ecosystem to reliably share and conduct trusted transactions with suppliers, banks, and other trade partners through blockchain.

“Blockchain promises to be one of the most transformative technologies of our generation,” said Amit Zavery, executive vice president, Oracle Cloud Platform. “We are excited to announce the availability of Oracle Blockchain Cloud Service. It is the result of years of R&D alongside our valued partners and customers. With Oracle’s platform, enterprises can enhance their business, eliminate unnecessary processes, and transact with their distributed networks more easily, transparently and securely than ever before.”

Oracle’s blockchain platform is built on top of The Linux Foundation’s Hyperledger Fabric. It is pre-assembled with all the underlying infrastructure dependencies, container lifecycle management, event services, identity management, REST proxy, and a number of operations and monitoring tools integrated under a single console, expediting the set-up and application development process. Oracle Blockchain Cloud Service is an Oracle-managed cloud platform backed by a 99.95 percent availability SLA, with built-in high availability configuration, autonomous recovery agents, as well as continuous ledger backup capabilities that can enable multi-datacenter disaster recovery across availability domains.

It further benefits from broad capabilities in Oracle Cloud Platform for plug-and-play integration with existing cloud and on-premises applications, API management, and application development environments and tools. Additionally, Oracle is delivering new SaaS applications to use blockchain technology for common use cases, such as track and trace, provenance identification, warranty and usage, and cold chain. The plug-and-play ability with Oracle and third-party applications results in faster integration with diverse systems of record; greatly accelerating time to market and multiplying the returns from using the blockchain platform across different application use cases.

“Blockchain projects are quickly moving from pilot to production as enterprises and governments begin to see the inherent value of distributed ledgers and smart contracts,” said Robert Parker, group vice president of manufacturing and retail insights, IDC. “As spending accelerates, buyers will need an enterprise class platform beyond open source that includes data security and integrity, scalability, manageability, and interoperability.”

Quotes from Global Organizations and Consortia Using Oracle Blockchain Cloud Service

Blockchain has the power to impact almost all industries and has applicability to verticals from transportation, supply chain and logistics, energy, retail and ecommerce, to financial services, telecommunications and public sector. Organizations and industry consortia are already using Oracle Blockchain Cloud Service to help ease transactions, track goods through supply chains, and reduce costs, including global shipping leaders, multi-national manufacturers, food producers and energy marketplaces. For example, Oracle joined the Food Safety Consortium and is a member of The Alliance for Telecommunications Industry Solutions, participating in its working group on blockchain.

“Oracle’s blockchain platform has helped us minimize the complexity of electronic fund transfers by reducing cost, increasing efficiency and security levels, and ultimately improving the overall customer experience,” said Ayman Qadoumi, A. Deputy General Manager, Information Technology and Security, Arab Jordan Investment Bank. “The built-in features such as identity management and data encryption made it an ideal choice given our industry requirements and compliance needs. Additionally, the REST APIs helped us and our vendors accelerate application development and integration with existing core services.”

“We are using Oracle Blockchain Cloud Service to develop an application to help simplify the complex documentation processes that plague the shipping industry,” said Steve Siu, CEO, CargoSmart. “Its comprehensive nature has allowed us to quickly turn prototypes into viable products in several 12-week sprints and so far, we have seen proven productivity gains of more than 30 percent compared to other blockchain platforms we tested. Another key purchasing requirement were its integrated management and operations tools, which allow business partners to monitor all of their blockchain activities and the health of the network.”

“As the producer of Extra Virgin Olive Oil from fruits grown in small family farms in Italy, we wanted to trace our product that we sell in the US market branded Bellucci Evoo across the entire supply chain – ‘from the tree to the shelf,’” said Andrea Biagianti, CIO of Certified Origins. “The availability of Oracle’s blockchain service simplifies the implementation and collaboration of all parties included, and represents a real competitive advantage for us. It adds a further level of transparency and information that is valuable for consumers looking for quality products and helps us to support the excellence of the small farms.”

Dr. Terence Lau, Convener of the Food Safety Consortium (FSC), welcomed Oracle as a Corporate Member of the non-profit technology development platform established by The Hong Kong Polytechnic University. “FSC is committed in advancing global food safety with science and technology,” said Dr. Lau. “With an increasing demand on multidisciplinary expertise in tackling challenges in food safety and quality, FSC is delighted to partner with Oracle to leverage blockchain, big data, and other IT applications to provide better food provenance, tracking and safety. We look forward to exploring Oracle Blockchain Cloud Service to improve and accelerate the Food Safety initiative in Hong Kong, as well as globally.”

“Companies are increasingly testing and seeing the value blockchain technologies can offer their organizations, from streamlining internal processes to building trusted networks with partners, customers and third parties,” said Brian Behlendorf, Executive Director, Hyperledger. “Based on Hyperledger Fabric, Oracle’s platform, with its support for interoperability with non-Oracle Hyperledger Fabric instances, will help build further support for open standards and interoperability, while illustrating how quickly, easily and securely businesses can begin utilizing blockchain to improve business processes.”

“Oracle’s blockchain platform is agile, easy to develop on and its enterprise-grade scalability makes it an ideal choice for deploying blockchain platforms rapidly,” said Juan José Miranda, blockchain project manager, Magia.Digital. “Based on our experience with the platform, we even chose it as our main development platform for the New York Consensus’ Hackathon 2018. With its features, we were able to impress the judges and ultimately be named winner of the contest.”

“Once we decided that blockchain would be the ideal technology to help bring our ‘From Mileage to Opportunity’ – M2O – loyalty program to life, we went through a rigorous approach to find the right vendor,” said Suwan Kim, CEO, MTO. “We were confident that Oracle’s pre-assembled blockchain cloud service could quickly help us realize a trusted platform that empowers users to convert credit card points and air miles to tokens. It also provides us with easy, ongoing management and all the underlying security, scalability, and resilience that are needed for a production solution like this.”

“We used Oracle’s blockchain to build a trusted platform for the automation of Customs Excise Trade business processes and procedures,” said Aber T Benjamin, Assistant Comptroller General Modernization, Nigeria Customs Service. “Using this technology, we found the entire business environment can be migrated to blockchain to automate processes and create transparency and predictability. Once the transition to blockchain is completed, NCS expects revenue growth increase of about 50 percent. This technology helps our organization to build global trust for Nigerian businesses through irrefutable data on goods manufactured in the country.”

“Using Oracle’s blockchain platform in factoring and supply chain finance seems a natural fit that enables us to go from a fragmented process along heterogeneous systems to a common base, that can be trusted by all, used as a reference, resolve conflicts and streamline processes,” said Nikolaos Vasilonikolidakis, CEO, Neurosoft. “Risk mitigation in factoring is paramount and Oracle’s blockchain brings common consensus that helps settle transactions in near real-time.”

“Oracle’s blockchain platform is enabling Sofbang to build smart contract supply chain solutions for our customers,” said Michael Ribet, product development manager, Sofbang. “We’re delighted with the rich functionality, robust scalability of the service and how it’s designed to integrate blockchain technology with current business critical applications seamlessly. Our customers are anticipating up to 35 percent efficiency gains by communicating more reliably and rapidly with their trading partners through Oracle Blockchain Cloud Service.”

“We helped found the Energy Blockchain Network earlier this year after realizing stakeholders in the solar project ecosystem needed a single source of truth with immutable records, so they can trust project status data,” said Jason Loyet, CEO of Solar Site Design. “We can now program each project’s status, list requirements to move forward, and rewards value contributions in near real-time. We are really happy with how easily NetSuite SuiteCloud Development Platform and Oracle‘s blockchain platform integrate with pre-packaged blockchain APIs, allowing us to explore new ways to develop smart contracts, manage the projects and reward programs.”

“As a company dedicated to making business-to-business payments and supply chain finance secure, frictionless and ubiquitous using blockchain, we are able to significantly accelerate the time to onboard corporations, their suppliers and banks by using Oracle’s blockchain platform,” said Amit Baid, CEO, TradeFin. “It provides a REST API-driven platform with rich integration options in Oracle Cloud Platform, allowing us to quickly onboard existing customers. Additionally, Oracle Scaleup Ecosystem provides access to the platform itself, cloud credits, mentoring, and a number of Oracle resources that can help start-ups like ours grow quickly.”

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Jesse Caputo
Oracle
+1.650.506.5697
jesse.caputo@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Jesse Caputo

  • +1.650.506.5697

List of Top Online Wireframe SAAS Tools – 2018

Nilesh Jethwa - Sun, 2018-07-15 23:07

Top 10 Wireframe Software for Cloud – Web designers and developers use cloud-based wireframe tools to collaborate with their team and also send clients the online version of the product design. Some of these tools enable you to establish a usability … Continue reading →

Via: MockupTiger Wireframes

Drilling down the pgSentinel Active Session History

Yann Neuhaus - Sun, 2018-07-15 13:35

In pgSentinel: the sampling approach for PostgreSQL I mentioned that one of the advantages of the ASH approach is the ability to drill down from an overview of the database activity, down to the details where we can do some tuning. The idea is to always focus on the components which are relevant to our tuning goal:

  • Filter/Group by the relevant dimension to focus on what you want to tune (a program, session, query, time window,…)
  • Sort by the most active samples, to spend time only where you know you can improve significantly

The idea is to start at a high level. Here is a GROUP BY BACKEND_TYPE to show the activity of the 'client backend' and the 'autovacuum worker':

select count(*), backend_type
from pg_active_session_history
where ash_time>=current_timestamp - interval '5 minutes'
group by backend_type
order by 1 desc
;
count | backend_type
-------+-------------------
1183 | client backend
89 | autovacuum worker

I selected only the last 5 minutes (the total retention is defined by pgsentinel_ash.max_entries and the sampling frequency by pgsentinel_ash.pull_frequency).

I ordered by the number of samples for each one, which gives a good idea of the proportion: most of the activity here is from the 'client backend'. It may be more interesting to show a percentage, such as 93% of the activity is from the client and 7% from the vacuum. However, this removes an interesting measure about the overall activity. The fact that we have 1183 samples within 5 minutes is an indication of the total load. In 5 minutes, we have 300 seconds, which means that each session can have up to 300 samples when being 100% active in the database during that time. 1183 samples during 5 minutes mean that we have on average 1183/300 ≈ 4 sessions active. This measure, calculated from the number of samples divided by the number of seconds, and known as Average Active Sessions (AAS), gives two different pieces of information (a quick sketch of the calculation follows the list below):

  • The overall activity in the database, similar to the load average at OS level
  • The relative activity of an aggregate (per session, program, event, time…)
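As a quick back-of-the-envelope sketch, the headline AAS above can be computed directly from the same view and time window by dividing the sample count by the number of seconds (here hard-coded to 300):

select round(count(*)::numeric / (5 * 60), 2) as aas
from pg_active_session_history
where ash_time >= current_timestamp - interval '5 minutes';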
AAS (Average Active Sessions)

In the previous post I counted the number of samples with count(distinct ash_time) because I knew that I had several sessions active during the whole time. But if there are periods of inactivity during those 5 minutes, there are no samples at all. And when drilling down to more detail, there will be some samples with no activity for a specific group. Here I calculate the number of seconds covered by the samples, using a window function:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type
from ash
group by samples,
backend_type
order by 1 desc fetch first 20 rows only
;
AAS | backend_type
-------+-------------------
3.95 | client backend
0.29 | autovacuum worker
(2 rows)

From this output, I know that I have about 4 client sessions running. This is what I want to tune.

Drill down on wait events

Adding the WAIT_EVENT_TYPE to the GROUP BY, I can have more detail about the resources used by those sessions:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,wait_event_type
from ash
group by samples,
backend_type,wait_event_type
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | wait_event_type
-------+-------------------+-----------------
2.57 | client backend | IO
0.94 | client backend | CPU
0.45 | client backend | LWLock
0.16 | autovacuum worker | CPU
0.12 | autovacuum worker | IO
0.00 | autovacuum worker | LWLock
(6 rows)

This gives a better idea about which system component may be tuned to reduce the response time or increase the throughput. IO is the major component here, with 2.57 AAS being on an I/O call. Let's get more information about which kind of I/O.

Drilling down to the wait event:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,wait_event_type,wait_event
from ash
group by samples,
backend_type,wait_event_type,wait_event
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | wait_event_type | wait_event
-------+-------------------+-----------------+------------------
1.52 | client backend | IO | DataFileWrite
0.94 | client backend | CPU | CPU
0.46 | client backend | IO | DataFileExtend
0.41 | client backend | IO | DataFileRead
0.33 | client backend | LWLock | WALWriteLock
0.15 | autovacuum worker | CPU | CPU
0.12 | client backend | LWLock | buffer_mapping
0.10 | autovacuum worker | IO | DataFileRead
0.08 | client backend | IO | WALInitWrite
0.08 | client backend | IO | BufFileWrite
0.02 | client backend | IO | WALWrite
0.01 | autovacuum worker | IO | DataFileWrite
0.01 | client backend | IO | DataFilePrefetch
0.00 | client backend | LWLock | buffer_content
0.00 | autovacuum worker | LWLock | buffer_mapping
(15 rows)

This gives more information. Of the average 2.57 sessions active on IO, 1.52 are writing to datafiles, 0.46 are waiting for a datafile to be extended, and 0.41 are reading from datafiles. That helps to focus on the areas where we might improve the performance, without wasting time on the events which are only a small part of the session activity.
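A possible next step, not part of the original output, is to combine both axes. Here is a minimal sketch that keeps only the DataFileWrite samples and groups them by statement, reusing exactly the same pattern and columns as the queries above:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
substr(query,1,60)
from ash
where wait_event='DataFileWrite'
group by samples,
substr(query,1,60)
order by 1 desc fetch first 10 rows only
;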

Drill-down on queries

This was a drill-down on the system axis (wait events instrument the system calls). This is useful when we think something is wrong on the system or the storage. But performance tuning must also drive the investigation on the application axis. The highest level is the user call, the TOP_LEVEL_QUERY:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,top_level_query
from ash
group by samples,
backend_type,top_level_query
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | top_level_query
-------+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0.95 | client backend | SELECT * FROM mypgio('pgio3', 50, 3000, 131072, 255, 8);
0.95 | client backend | SELECT * FROM mypgio('pgio2', 50, 3000, 131072, 255, 8);
0.95 | client backend | SELECT * FROM mypgio('pgio4', 50, 3000, 131072, 255, 8);
0.95 | client backend | SELECT * FROM mypgio('pgio1', 50, 3000, 131072, 255, 8);
0.25 | autovacuum worker | autovacuum: VACUUM ANALYZE public.pgio2
0.02 | client backend | commit;
0.01 | client backend | select * from pg_active_session_history where pid=21837 order by ash_time desc fetch first 1 rows only;
0.01 | client backend | with ash as ( +
| | select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples +
| | from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes' +
...

Here I see 4 user calls responsible for most of the 4 active 'client backend' sessions, each with AAS=0.95. This is actually what is running: the PGIO benchmark (see https://kevinclosson.net/) with 4 sessions calling the mypgio function.

The function we see in TOP_LEVEL_QUERY is itself running some queries, and the big advantage of the pgSentinel extension over pg_stat_activity is the capture of the actual statement running, with the actual values of the parameters:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,substr(query,1,100)
from ash
group by samples,
backend_type,substr(query,1,100)
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | substr
-------+-------------------+----------------------------------------------------------------------------------------
0.26 | autovacuum worker |
0.02 | client backend | commit
0.02 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 3567 AND 3822
0.01 | client backend | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN 5729 AND 5984
0.01 | client backend | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN 5245 AND 5500
0.01 | client backend | truncate table l_ash.ps
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 3249 AND 3504
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 57 AND 312
0.01 | client backend | UPDATE pgio4 SET scratch = scratch + 1 WHERE mykey BETWEEN 3712 AND 3720
0.01 | client backend | SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 1267 AND 1522
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 703 AND 958
0.01 | client backend | SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 2025 AND 2280
0.01 | client backend | insert into l_ash.ps_diff +
| | select ps1.pid,ps1.uname,ps1.pr,ps1.ni,ps1.virt,ps1.res,ps1.shr,ps1.s,ps1.
0.01 | client backend | UPDATE pgio4 SET scratch = scratch + 1 WHERE mykey BETWEEN 2690 AND 2698
0.01 | client backend | SELECT sum(scratch) FROM pgio3 WHERE mykey BETWEEN 5463 AND 5718
0.01 | client backend | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN 1467 AND 1722
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 4653 AND 4908
(20 rows)

Here, no single statement is at the top; we only have a few samples for each execution. This is because each execution is different (different parameter values) and they have balanced execution times. If one query were longer for one specific set of parameter values, it would show up at the top here.

Finally, we can also aggregate at a higher level than QUERY, using QUERYID, which identifies the prepared statement and does not change when it is executed with different parameter values. If we want the query text, we can join with PG_STAT_STATEMENTS:

with ash as (
select *,datid dbid,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",dbid,
backend_type,queryid,pg_stat_statements.query
from ash left outer join pg_stat_statements using(dbid,queryid)
group by samples,dbid,
backend_type,queryid,pg_stat_statements.query
order by 1 desc fetch first 15 rows only
;
AAS | dbid | backend_type | queryid | query
-------+-------+----------------+------------+------------------------------------------------------------------------------------------------------
0.89 | 17487 | client backend | 837728477 | SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 100926 AND 101181
0.70 | 17487 | client backend | 3411884874 | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN $1 AND $2
0.68 | 17487 | client backend | 1046864277 | SELECT sum(scratch) FROM pgio3 WHERE mykey BETWEEN 1591 AND 1846
0.67 | 17487 | client backend | 2994234299 | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN $1 AND $2
0.33 | 17487 | client backend | 1648177216 | UPDATE pgio1 SET scratch = scratch + 1 WHERE mykey BETWEEN 2582 AND 2590
0.32 | 17487 | client backend | 3381000939 | UPDATE pgio3 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3
0.30 | 17487 | client backend | 1109524376 | UPDATE pgio4 SET scratch = scratch + 1 WHERE mykey BETWEEN 5462 AND 5470
0.11 | 17487 | client backend | 3355133240 | UPDATE pgio2 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3
0.05 | 17547 | client backend | 2771355107 | update l_ash.parameters set value=now(),timestamp=now() where name=$1
0.05 | 17547 | client backend | 1235869898 | update l_ash.parameters set value=$1,timestamp=now() where name=$2
0.02 | 13806 | client backend | 935474258 | select * from pg_active_session_history where pid=$1 order by ash_time desc fetch first $2 rows only
0.01 | 13806 | client backend | 164740364 | with ash as ( +

This shows the main queries running: SELECT and UPDATE on PGIO1, PGIO2, PGIO3 and PGIO4. They run with different parameter values but share the same QUERYID. PG_STAT_STATEMENTS seems not very consistent when capturing the query text: some entries show the parameter placeholders, others show actual values. But keep in mind that these are prepared statements. We do not have 0.89 average sessions running 'SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 100926 AND 101181'. This is 'SELECT sum(scratch) FROM pgio2' running with different parameter values, and for whatever reason the PG_STAT_STATEMENTS extension displays one set of values rather than 'BETWEEN $1 AND $2'.
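
If the displayed text is confusing, a simple workaround is to aggregate on QUERYID alone and label each group with one of the sampled query texts from the ASH rows themselves. This is only a sketch, using the same pg_active_session_history columns as above; the text shown for each QUERYID is then just one arbitrary execution:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
queryid,
min(substr(query,1,60)) as sample_query  -- one arbitrary sampled statement text per queryid
from ash
where backend_type='client backend'
group by samples,queryid
order by 1 desc fetch first 10 rows only
;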

Time dimension

Of course we can also query all the samples and drill down with a graphical tool, which is a better visualization for the time axis. Here is a quick Excel PivotChart built from those 5 minutes of samples:
I always have 4 sessions running, as we saw in the average, but the wait event detail is not uniform over the timeline. This is where you drill down on the time axis. It can be helpful for investigating a short-duration issue, or for trying to understand non-uniform response time. For example, coming from Oracle, I'm not used to this pattern where, from one second to the next, the wait profile is completely different, probably because of all the background activity such as vacuum, WAL writing, syncing buffers to disk, garbage collection,… The workload here, PGIO, the SLOB method for PostgreSQL, consists of short uniform queries, so it would be interesting to have some statistics about the response time variation.
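
To build the data behind such a chart directly in SQL, a per-second aggregation of the samples is enough. Here is a minimal sketch, again assuming only the pg_active_session_history columns used earlier; the result can be pivoted in Excel or any charting tool:

select date_trunc('second',ash_time) as sample_time,  -- one row per second and wait class
wait_event_type,
count(*) as active_sessions
from pg_active_session_history
where ash_time>=current_timestamp - interval '5 minutes'
group by 1,2
order by 1,2
;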

Note that in this database cluster, in addition to the PGIO workload, I have a small application running which occasionally commits very small changes, and this is why you see the peaks with 1 session on WALWrite and 4 sessions waiting on WALWriteLock. This adds to the chaos of waits.

This extension providing active session sampling is only the first component of pgSentinel, so do not spend too much time building queries, reports and graphs on it; let's see what will come next with pgSentinel:

pgSentinel is in progress….@postgresql @amplifypostgres @PostgreSQLFR @BertrandDrouvot @ckikof pic.twitter.com/Pwq8vB69MI

— pgSentinel (@Pg_Sentinel) July 11, 2018

 

The article Drilling down the pgSentinel Active Session History appeared first on the dbi services blog.

Oracle SOA Suite: Profiles (SOA Foundation, BPEL only, Orchestration, Enterprise, BPM Classic, ...)

Dietrich Schroff - Sun, 2018-07-15 07:59
Inside the Enterprise Manager (URL: http://yourhost:7001/em) you can configure SOA Suite profiles:

Choose "Common Properties"

Click on "Change Profile"
The Oracle documentation shows the following:

The full set of adapters:
  • file,
  • FTP, 
  • database, 
  • JMS, 
  • MQSeries, 
  • AQ, 
  • E-Business Suite, 
  • User Messaging Service, 
  • socket, 
  • LDAP, 
  • Coherence, 
  • MSMQ,
  • JDE
The limited set of adapters:
  • file, 
  • FTP, 
  • database, 
  • JMS, 
  • MQSeries, 
  • AQ, 
  • E-Business Suite,
  • User Messaging Service

I'll be at APEX Meetup Munich: Thu 19 Jul 2018

Dimitri Gielis - Sun, 2018-07-15 06:14
Just a quick note that I'll do two presentations at the APEX Meetup in Munich on Thursday, July 19th 2018.

In the first presentation I'll bring you to a virtual and augmented world, entirely built in Oracle Application Express (APEX). There are 30 Google Cardboards available to make the experience complete. Fun guaranteed! :)


At KScope I was also interviewed by Bob Rhubart about my talks there, one of which was the AR/VR presentation.


In my second presentation in Munich I'll show the upcoming version of APEX Office Print (AOP).
I'll show some features nobody has seen before :) With every major release of AOP I feel like this:


If you are in the Munich area I would love to meet you at the meetup.

Categories: Development

After motherboard change on BDA server eth0 network interface is missing

Alejandro Vargas - Sun, 2018-07-15 03:55

Recently I had a BDA server that refused to come up. Every attempt to run start /SYS failed.

We were able to identify this as a motherboard failure. After the motherboard was replaced, we were able to bring up the server using the ILOM.

Back on track, I tried to set up the management network, but this failed because the eth0 NIC was missing.

[root@bdanode01 ~]# ethtool -i eth0
Cannot get driver information: No such device

But I noticed that eth1 was available, although not functioning.

[root@bdanode01 ~]# ethtool -i eth1
driver: igb
version: 5.3.0-k
firmware-version: 3.25, 0x80000681
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

The first step was to check that the MAC address of eth0 was correct. For that, I checked it in the ILOM CLI and in the file /etc/sysconfig/network-scripts/ifcfg-eth0:

-> show /SYS/MB/NET0

/SYS/MB/NET0
    Targets:

    Properties:
        type = Network Interface
        ipmi_name = MB/NET0
        fru_description = 1G Ethernet Controller
        fru_manufacturer = INTEL
        fru_part_number = i210
        fru_macaddress = 00:10:e0:de:5d:84
        fault_state = OK
        clear_fault_action = (none)

In /etc/sysconfig/network-scripts/ifcfg-eth0 the MAC address was missing, so I added it.

Then I checked /etc/udev/rules.d/70-persistent-net.rules.

In this file we had the mismatch: it seems the MAC address from the previous motherboard remained in place for eth0, while the new one was assigned to eth1.

Then I edited the file, leaving only one line for eth0, with the correct MAC address:

[root@bdanode01 ~]# cat /etc/udev/rules.d/70-persistent-net.rules

# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x1533 (igb) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:10:e0:de:5d:84", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

After a reboot, eth0 was back and I was able to set up the network.
 

Categories: DBA Blogs

ADF Postback Payload Size Optimization

Andrejus Baranovski - Sun, 2018-07-15 01:45
Recently I came across a property called oracle.adf.view.rich.POSTBACK_PAYLOAD_TYPE. This property helps to optimize the postback payload size. It is described in the ADF Faces configuration section - A.2.3.16 Postback Payload Size Optimization. An ADF partial request executes an HTTP POST that includes the values of all fields. When the postback property is set to dirty, only changed values are included in the HTTP POST. As a result, the server receives only the changed attributes, which can potentially reduce server processing time and make the HTTP request smaller. This can be especially important for large forms with many fields.

Let's take a look at an example. After clicking any button in the form, go to the network monitor and study the Form Data section. You will see the IDs and values of all fields in the UI. By default, all fields are submitted with the HTTP request, even those that were not changed:


The postback optimization property can be set in web.xml. By default its value is full; change it to dirty:


With the value set to dirty, change at least one field and then press any button. Observe the Form Data section in the network monitor: only fields with changed values will be submitted:


Try to test it in your project and see the difference.

Check my sample app for this use case on GitHub.

Quiz Night

Jonathan Lewis - Sat, 2018-07-14 13:07

Because it's been a long time since the last quiz night. Here's a question prompted by a recent thread on the ODevCom database forum: how many rows will Oracle sort (assuming you have enough rows to start with in all_objects) for the final query, and how many sort operations will that take?


drop table t1 purge;

create table t1 nologging as select * from all_objects where rownum < 50000;

select owner, count(distinct object_type), count(distinct object_name) from t1 group by owner;

Try to resist the temptation of doing a cut-n-paste and running the code until after you’ve thought about the answer.

My top 5 APEX 18.1 Plugins

Dimitri Gielis - Sat, 2018-07-14 05:46
With every new version of Oracle Application Express (APEX) new features are added and the life of a developer is made even easier. If the feature set is not enough, or you find yourself building the same functionality over and over, you can always extend APEX with plug-ins.

There are six different types of plug-ins: dynamic action, item, region, process, authentication scheme and authorization scheme.

Plug-ins are absolutely fantastic for extending the native functionality of APEX in a declarative way. A plug-in becomes a declarative option in the APEX Builder and has the [Plug-in] text next to it. In the next screenshot, you see the dynamic actions being extended by two plug-ins.


If you are searching for an APEX plug-in, APEX World > Plug-ins is the place I typically go to. The nice thing about that site is that the plug-ins seem to be maintained: if a plug-in is no longer supported, it gets the status deprecated.

!! And here lies the catch with using plug-ins. When you decide to use a plug-in in your project, you become responsible for it and need to make sure it's compatible with every release of Oracle APEX. Many plug-ins are open source and many plug-in developers maintain their plug-ins, but it's really important to understand that in the end you are responsible for what you put in your application. If the plug-in is not secure or it breaks in the next release of APEX, you need to find a solution. So use plug-ins with care and check, for example, how many likes a plug-in has and what the comments say about the plug-in and its author. Oracle does not review or support the plug-ins !!

When I saw Travis's tweet, I thought I would do a blog post on the top 5 plug-ins I use in almost every project.


Here we go:

1. Built with love using Oracle APEX

I'm proud to build applications with Oracle Application Express, and this plug-in makes that very clear :) At the bottom of the app, you will see this text:


Note that in Oracle APEX 18.1 this text is included by default and you don't even need to add the plug-in. Nevertheless, I wanted to include it in this list as it should be there in every app, even the ones built before APEX 18.1 :)

2. Select2

When a select list (or drop-down) has many values, it takes too long to find the right one. Select2 makes it easy to search for values; it also supports lazy loading and multiple select.


3. APEX Office Print

APEX Office Print extends APEX so it becomes possible to export to native Excel files and generate documents in Word, PowerPoint, PDF, HTML and text, all based on your own template. It has many more features; I blogged about some of them before.



4. Dropzone

APEX 18.1 has declarative multi-file upload, but I still love the Dropzone plug-in developed by Daniel Hochleitner. You can drag multiple files from your desktop straight into your APEX app. Daniel is one of my favorite plug-in developers: when he releases something, you know it will be good.



5. Modal LOV

This is a newer plug-in and I haven't used it that much yet, but I'm sure I will. The nice thing about this item type plug-in is that it also supports Interactive Grid. Where Select2 stays within the page, this Modal LOV comes with a modal list of values (pop-up), which is great if you want to show multiple columns or need more context for the record you need to select.


There are many more plug-ins out there; most of them work on APEX 5.x and upwards. For example, Pretius has some cool plug-ins too; I recently used their plug-in for creating nested reports in a project. Another site where you can find plug-ins is APEX-Plugin.com.

Categories: Development

PGDay Amsterdam – follow up 2 – Where do null values go to in a hash partitioned table?

Yann Neuhaus - Fri, 2018-07-13 23:17

This is the second follow-up, which covers this question: when you hash partition a table in PostgreSQL 11, where do null values for the partitioning column go? Let's go…

In the demo I used this little table:

postgres=# select version();
                                                            version                                                          
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11beta1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)
postgres=# create table part2 ( a int, list varchar(10) ) partition by hash (a);
CREATE TABLE
postgres=# create table part2_1 partition of part2 FOR VALUES WITH (MODULUS 3, REMAINDER 0);
CREATE TABLE
postgres=# create table part2_2 partition of part2 FOR VALUES WITH (MODULUS 3, REMAINDER 1);
CREATE TABLE
postgres=# create table part2_3 partition of part2 FOR VALUES WITH (MODULUS 3, REMAINDER 2);
CREATE TABLE
postgres=# \d+ part2
                                          Table "public.part2"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer               |           |          |         | plain    |              | 
 list   | character varying(10) |           |          |         | extended |              | 
Partition key: HASH (a)
Partitions: part2_1 FOR VALUES WITH (modulus 3, remainder 0),
            part2_2 FOR VALUES WITH (modulus 3, remainder 1),
            part2_3 FOR VALUES WITH (modulus 3, remainder 2)

The data we played with was this:

postgres=# insert into part2 (a,list) values (1,'beer');
INSERT 0 1
postgres=# insert into part2 (a,list) values (2,'whine');
INSERT 0 1
postgres=# insert into part2 (a,list) values (3,'schnaps');
INSERT 0 1
postgres=# select * from only part2_1;
 a | list  
---+-------
 2 | whine
(1 row)

postgres=# select * from only part2_2;
 a |  list   
---+---------
 3 | schnaps
(1 row)

postgres=# select * from only part2_3;
 a | list 
---+------
 1 | beer
(1 row)

We have the data evenly distributed over the three partitions. When we insert a row which contains a NULL value for the column we partitioned on:

postgres=# insert into part2 (a,list) values (null,'cocktail');
INSERT 0 1

… where does that row go?

postgres=# select * from only part2_1;
 a |   list   
---+----------
 2 | whine
   | cocktail
(2 rows)

postgres=# select * from only part2_2;
 a |  list   
---+---------
 3 | schnaps
(1 row)

postgres=# select * from only part2_3;
 a | list 
---+------
 1 | beer
(1 row)

It goes to the first partition and every following NULL key row gets there as well:

postgres=# insert into part2 (a,list) values (null,'rum');
INSERT 0 1
postgres=# select * from only part2_1;
 a |   list   
---+----------
 2 | whine
   | cocktail
   | rum
(3 rows)

I couldn't find anything in the documentation about that, so I sent a mail to the general mailing list, and here is the answer: "The calculated hash value for the null value will be zero, therefore, it will fall to the partition having remainder zero."
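
To double-check this on the demo table without querying each partition separately, the tableoid system column can be used: it identifies the partition that physically stores each row. A minimal sketch (based on the outputs above, the NULL-key rows should show up under part2_1):

select tableoid::regclass as part_name,  -- the partition storing the row
       a, list
from part2
order by 1,2;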

 

The article PGDay Amsterdam – follow up 2 – Where do null values go to in a hash partitioned table? appeared first on the dbi services blog.
