Feed aggregator

Oracle database backup

Tom Kyte - 1 hour 13 min ago
Hi Developers, I am using Oracle 10g. I need to take a backup of my database. I can take a backup of tables, triggers, etc. using SQL Developer's Database Backup option, but there are multiple users created in that database. Can you please support ...
Categories: DBA Blogs

How do you purge stdout files generated by DBMS_SCHEDULER jobs?

Tom Kyte - 1 hour 13 min ago
When running scheduler jobs, logging is provided in USER_SCHEDULER_JOB_LOG and USER_SCHEDULER_JOB_RUN_DETAILS. And stdout is provided in $ORACLE_HOME/scheduler/log. The database log tables are purged either by default 30 days (log_history attribute)....
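The retention of the database-side log tables can be changed, or they can be purged on demand; a minimal sketch (the job name MY_JOB is a hypothetical placeholder, and note that the stdout files under $ORACLE_HOME/scheduler/log are ordinary OS files, so they have to be removed with OS tools or an external job):

```sql
-- Keep only 7 days of scheduler history instead of the default 30.
begin
        dbms_scheduler.set_scheduler_attribute('log_history', '7');
end;
/

-- Or purge immediately: remove entries older than 3 days for one job.
begin
        dbms_scheduler.purge_log(
                log_history => 3,
                which_log   => 'JOB_AND_WINDOW_LOG',
                job_name    => 'MY_JOB'
        );
end;
/
```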
Categories: DBA Blogs

V$SQL history

Tom Kyte - 1 hour 13 min ago
How many records/entries are there in v$sql and v$session, and how are they flushed – on a schedule (e.g. weekly) or under space pressure? Thanks
Categories: DBA Blogs

Dynamic SQL in regular SQL queries

Tom Kyte - 1 hour 13 min ago
Hi, pardon me for asking this question (I know I can do this with the help of a PL/SQL function), but I would like to ask just in case. I'm wondering if this is doable in a regular SQL statement without using a function? I'm trying to see if I can write a ...
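For what it's worth, one commonly quoted workaround (a sketch, not necessarily what the eventual answer recommends) pushes the dynamically assembled statement through DBMS_XMLGEN, which accepts the query as a string, and then unwraps the result with XMLTABLE:

```sql
-- The inner string could be built dynamically at runtime;
-- here it is a fixed example for illustration.
select c
from   xmltable(
               '/ROWSET/ROW'
               passing dbms_xmlgen.getxmltype('select count(*) c from user_tables')
               columns c number path 'C'
       );
```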
Categories: DBA Blogs

Adding hash partitions and spreading data across

Tom Kyte - 1 hour 13 min ago
Hi, I have a table with a certain number of range partitions, and for each partition I have eight hash subpartitions. Is there a way to increase the number of subpartitions to ten and distribute the rows evenly? I have tried "alter tabl...
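For reference, hash subpartitions can be added one at a time with ADD SUBPARTITION, each addition rehashing rows from an existing subpartition (table and partition names below are hypothetical). One caveat worth remembering: hash distribution is only even when the subpartition count is a power of two, so going from eight to ten will leave the data skewed.

```sql
-- Add two system-named hash subpartitions to one range partition;
-- repeat for each range partition of the table.
alter table my_range_hash_tab modify partition p_2018_q1 add subpartition;
alter table my_range_hash_tab modify partition p_2018_q1 add subpartition;
```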
Categories: DBA Blogs

Bug when using 1 > 0 at "case when" clause

Tom Kyte - 1 hour 13 min ago
Hello, guys! Recently, I've found a peculiar situation when building a SQL query. The purpose was to add a "where" clause using a "case" statement that was intended to verify whether a given condition was greater than zero. I've reproduced it using a "wit...
Categories: DBA Blogs

Difference between explain and execute plan and actual execute plan

Tom Kyte - 1 hour 13 min ago
Hi, I have often got questions about explain plan versus the execution plan. As far as I know, explain plan gives you the execution plan of the query. But I have also read that the execution plan is the plan which the Oracle Optimizer intends to use for the query and...
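The distinction is easy to see with DBMS_XPLAN: EXPLAIN PLAN shows the plan the optimizer predicts without executing the statement, while DISPLAY_CURSOR after execution shows the plan that was actually used (and, with the gather_plan_statistics hint, estimated versus actual row counts). A minimal sketch:

```sql
set serveroutput off

-- Predicted plan: EXPLAIN PLAN does not execute the statement.
explain plan for select count(*) from all_objects;
select * from table(dbms_xplan.display);

-- Actual plan: run the statement, then ask for the plan of the last cursor.
select /*+ gather_plan_statistics */ count(*) from all_objects;
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```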
Categories: DBA Blogs

Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data

Oracle Press Releases - 4 hours 3 min ago
Press Release
Nine Leading Retail Automotive Marketing Agencies Are First to Complete Comprehensive Program, Receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) Designation

Redwood City, Calif.—Feb 21, 2018

Oracle Data Cloud today launched an advanced data training and marketing program to help savvy auto dealer agencies better use digital data. Oracle also announced the first nine leading Tier 3 auto marketing agencies to qualify for the rigorous program and receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) designation. Those companies included: C-4 Analytics, Dealer Inspire, Dealers United, Goodway Group, L2TMedia, SocialDealer, Stream Marketing, Team Velocity, and TurnKey Marketing. Oracle’s Auto Elite Data Marketer program will help agencies effectively allocate their marketing resources as advertising budgets shift from offline media to digital platforms.

“As the automotive industry goes through an era of transformational change, dealers are literally where the rubber meets the road, and they need cutting edge marketing tools to help maintain or grow market share,” said Joe Kyriakoza, VP and GM of Automotive for the Oracle Data Cloud. “Tier 3 marketers know that reaching the right audience drives measurable campaign results. By increasing the data skills of our marketing agency partners, Oracle can help them directly impact and improve their clients’ campaign results.”

Oracle Data Cloud’s Auto Elite Data Marketer Program includes:

  1. Education & training - Expert training for the marketing agency and their extended teams on advanced targeting strategies and audience planning techniques.

  2. Customized collateral - Co-branded collateral pieces to support client marketing efforts, including summary sheets, decks, activation guides, and other materials.

  3. Co-branded marketing - Co-branded marketing initiatives through thought leadership, speaking opportunities, and co-hosted webinars.

  4. Strategic sales support - Access to Oracle’s specialized Retail Solutions Team and the Oracle Data Hotline to support strategic pitches, events, and RFP inquiries.

“We are proud to have worked with Oracle Data Cloud since the beginning, shaping the program together to drive more business for dealers using audience data,” said Joe Chura, CEO of Dealer Inspire. “Our team is excited to continue this relationship as an Elite Data Marketer, empowering Dealer Inspire clients with the unique advantage of utilizing Oracle data for automotive retail targeting.”

“We are consumed with data that allows for hyper-personalization and better targeting of in-market consumers,” said David Boice, CEO and Chairman of Team Velocity Marketing. “Oracle is a new goldmine of data to drive excellent sales and service campaigns and a perfect complement to our Apollo Technology Platform.”  According to Joe Castle, Founder of SOCIALDEALER, “We are excited to be one of the few Auto Elite Data Marketers which provides us a deeper level of custom audience data access from Oracle. Our companies look forward to working closely to further deliver a superior ROI to all our dealership and OEM relationships.”

Through the Auto Elite Data Marketer program, retail marketers learn how to use Oracle’s expansive selection of automotive audiences, which cover the entire vehicle ownership lifecycle, such as in-market car shoppers, existing owners, and individuals needing auto finance, credit assistance, or vehicle service. This comprehensive data set allows clients to precisely target the right prospects for any automotive retail campaign. Oracle has teamed up with industry-leading data providers to build the robust dataset, such as IHS Markit’s Polk for vehicle ownership and intent data, Edmunds.com for online car shopper data, and TransUnion, the trusted source for consumer finance audiences.

Oracle Data Cloud plans to expand the Auto Elite Data Marketer program to include additional dealer marketing agencies, as well as working directly with dealers and dealer groups and their media partners to use data effectively for advanced targeting and audience planning efforts. For more information about the Auto Elite Data Marketer program, please contact the Oracle Auto team at dealersolutions@oracle.com.

Oracle Data Cloud

Oracle Data Cloud operates the BlueKai Data Management Platform and the BlueKai Marketplace, the world’s largest audience data marketplace. Leveraging more than $5 trillion in consumer transaction data, more than five billion global IDs and 1,500+ data partners, Oracle Data Cloud connects more than two billion consumers around the world across their devices each month. Oracle Data Cloud is made up of AddThis, BlueKai, Crosswise, Datalogix and Moat.

Oracle Data Cloud helps the world’s leading marketers and publishers deliver better results by reaching the right audiences, measuring the impact of their campaigns and improving their digital strategies. For more information and a free data consultation, contact The Data Hotline at www.oracle.com/thedatahotline.

Contact Info
Simon Jones
Oracle
+1.650.506.0325
s.jones@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Interval Partition Problem

Jonathan Lewis - 14 hours 18 min ago

Assume you’ve got a huge temporary tablespace, there’s plenty of space in your favourite tablespace, you’ve got a very boring, simple table you want to copy and partition, and no-one and nothing is using the system. Would you really expect a (fairly) ordinary “create table t2 as select * from t1” to end with the Oracle error “ORA-01652: unable to extend temp segment by 128 in tablespace TEMP”? That’s the temporary tablespace that’s out of space, not the target tablespace for the copy.

Here’s a sample data set (tested on 11.2.0.4 and 12.1.0.2) to demonstrate the surprise – you’ll need about 900MB of space by the time the entire model has run to completion:

rem
rem     Script:         pt_interval_threat_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018
rem

column tomorrow new_value m_tomorrow
select to_char(sysdate,'dd-mon-yyyy') tomorrow from dual;

create table t1
as
with g as (
        select rownum id
        from dual
        connect by level <= 2e3
)
select
        rownum id,
        trunc(sysdate) + g2.id  created,
        rpad('x',50)            padding
from
        g g1,
        g g2
where
        rownum <= 4e6  -- > comment to avoid WordPress format mess
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

I’ve created a table of 4 million rows, covering 2,000 dates out into the future starting from sysdate+1 (tomorrow). As you can see there’s nothing in the slightest bit interesting, unusual, or exciting about the data types and content of the table.

I said my “create table as select” was fairly ordinary – but it’s actually a little bit out of the way because it’s going to create a partitioned copy of this table.


execute snap_my_stats.start_snap

create table t2
partition by range(created)
interval(numtodsinterval(7, 'day'))
(
        partition p_start       values less than (to_date('&m_tomorrow','dd-mon-yyyy'))
)
storage(initial 1M)
nologging
as
select
        *
from
        t1
;

set serveroutput on
execute snap_my_stats.end_snap

I’ve created the table as a range-partitioned table with an interval() declared. Conveniently I need only mention the partitioning column by name in the declaration, rather than listing all the columns with their types, and I’ve only specified a single starting partition. Since the interval is 7 days and the data spans 2,000 days I’m going to end up with nearly 290 partitions added.

There’s no guarantee that you will see the ORA-01652 error when you run this test – the data size is rather small and your machine may have sufficient other resources to hide the problem even when you’re looking for it – but the person who reported the problem on the OTN/ODC database forum was copying a table of 2.5 Billion rows using about 200 GB of storage, so size is probably important, hence the 4 million rows as a starting point on my small system.

Of course, hitting an ORA-01652 on TEMP when doing a simple “create as select” is such an unlikely sounding error that you don’t necessarily have to see it actually happen; all you need to see (at least as a starting point in a small model) is TEMP being used unexpectedly so, for my first test (on 11.2.0.4), I’ve included some code to calculate and report changes in the session stats – that’s the calls to the package snap_my_stats. Here are some of the more interesting results:


---------------------------------
Session stats - 20-Feb 16:58:24
Interval:-  14 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
table scan rows gotten                                               4,000,004
table scan blocks gotten                                                38,741

session pga memory max                                             181,338,112

sorts (rows)                                                         2,238,833

physical reads direct temporary tablespace                              23,313
physical writes direct temporary tablespace                             23,313

The first couple of numbers show the 4,000,000 rows being scanned from 38,741 table blocks – and that’s not a surprise. But for a simple copy the 181MB of PGA memory we’ve acquired is a little surprising, though less so when we see that we’ve sorted 2.2M rows, and then ended up spilling 23,313 blocks to the temporary tablespace. But why are we sorting anything – what are those rows?
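If you don't have the author's snap_my_stats package, a rough stand-in (a sketch only, reporting current cumulative values rather than the before/after deltas the package computes) is to query v$mystat joined to v$statname around the statement of interest:

```sql
-- Run once before and once after the CTAS, and diff the values by hand.
select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   ms.statistic# = sn.statistic#
and     sn.name in (
                'table scan rows gotten',
                'sorts (rows)',
                'physical reads direct temporary tablespace',
                'physical writes direct temporary tablespace'
        );
```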

My first thought was that there was a bug in some recursive SQL that was trying to define or identify dynamically created partitions, or maybe something in the space management code trying to find free space, so the obvious step was to enable extended tracing and look for any recursive statements that were running a large number of times or doing a lot of work. There weren’t any – and the trace file (particularly the detailed wait events) suggested the problem really was purely to do with the CTAS itself; so I ran the code again enabling events 10032 and 10033 (the sort traces) and found the following:


---- Sort Statistics ------------------------------
Initial runs                              1
Input records                             2140000
Output records                            2140000
Disk blocks 1st pass                      22292
Total disk blocks used                    22294
Total number of comparisons performed     0
Temp segments allocated                   1
Extents allocated                         175
Uses version 1 sort
Uses asynchronous IO

One single operation had resulted in Oracle sorting 2.14 million rows (but not making any comparisons!) – and the only table in the entire system with enough rows to do that was my source table! Oracle seems to be sorting a large fraction of the data for no obvious reason before inserting it.

  • Why, and why only 2.14M out of 4M ?
  • Does it do the same on 12.1.0.2 (yes), what about 12.2.0.1 (no – hurrah: unless it just needs a larger data set!).
  • Is there any clue about this on MoS (yes Bug 17655392 – though that one is erroneously, I think, flagged as “closed not a bug”)
  • Is there a workaround ? (Yes – I think so).

Playing around and trying to work out what’s happening, the obvious pointers were the large memory allocation and the “incomplete” spill to disc – what would happen if I fiddled around with workarea sizing, switching it to manual, say, or setting the pga_aggregate_target to a low value? At one point I got results showing 19M rows (that’s not a typo, it really was close to 5 times the number of rows in the table) sorted with a couple of hundred thousand blocks of TEMP used – the 10033 trace showed 9 consecutive passes (that I can’t explain) as the code executed, from which I’ve extracted the row counts, temp blocks used, and number of comparisons made:


Input records                             3988000
Total disk blocks used                    41544
Total number of comparisons performed     0

Input records                             3554000
Total disk blocks used                    37023
Total number of comparisons performed     0

Input records                             3120000
Total disk blocks used                    32502
Total number of comparisons performed     0

Input records                             2672000
Total disk blocks used                    27836
Total number of comparisons performed     0

Input records                             2224000
Total disk blocks used                    23169
Total number of comparisons performed     0

Input records                             1762000
Total disk blocks used                    18357
Total number of comparisons performed     0

Input records                             1300000
Total disk blocks used                    13544
Total number of comparisons performed     0

Input records                             838000
Total disk blocks used                    8732
Total number of comparisons performed     0

Input records                             376000
Total disk blocks used                    3919
Total number of comparisons performed     0

There really doesn’t seem to be any good reason why Oracle should do any sorting of the data (and maybe it wasn’t given the total number of comparisons performed in this case) – except, perhaps, to allow it to do bulk inserts into each partition in turn or, possibly, to avoid creating an entire new partition at exactly the moment it finds just the first row that needs to go into a new partition. Thinking along these lines I decided to pre-create all the necessary partitions just in case this made any difference – the code is at the end of the blog note. Another idea was to create the table empty (with, and without, pre-created partitions), then do an “insert /*+ append */” of the data.

Nothing changed (much – though the number of rows sorted kept varying).

And then — it all started working perfectly with virtually no rows reported sorted and no I/O to the temporary tablespace!

Fortunately I thought of looking at v$memory_resize_ops and found that the automatic memory management had switched a lot of memory to the PGA, allowing Oracle to do whatever it needed to do completely in memory without reporting any sorting. A quick re-start of the instance fixed that “workaround”.

Still struggling with finding a reasonable workaround I decided to see if the same anomaly would appear if the table were range partitioned but didn’t have an interval clause. This meant I had to precreate all the necessary partitions, of course – which I did by starting with an interval partitioned table, letting Oracle figure out which partitions to create, then disabling the interval feature – again, see the code at the end of this note.

The results: no rows sorted on the insert, no writes to temp. Unless it’s just a question of needing even more data to reproduce the problem with simple range partitioned tables, it looks as if there’s a problem somewhere in the code for interval partitioned tables and all you have to do to work around it is precreate loads of partitions, disable intervals, load, then re-enable the intervals.

Footnote:

Here’s the “quick and dirty” code I used to generate the t2 table with precreated partitions:


create table t2
partition by range(created)
interval(numtodsinterval(7, 'day'))
(
        partition p_start values less than (to_date('&m_tomorrow','dd-mon-yyyy'))
)
storage(initial 1M)
nologging
monitoring
as
select
        *
from
        t1
where
        rownum <= 0
;


declare
        m_max_date      date;
begin
        select  max(created)
        into    m_max_date
        from    t1
        ;

        for i in 1..m_max_date - trunc(sysdate) loop
                dbms_output.put(
                        to_char(trunc(sysdate) + i,'dd-mon-yyyy') || chr(9)
                );
                execute immediate
                        'lock table t2 partition for ('''  ||
                        to_char(trunc(sysdate) + i,'dd-mon-yyyy') ||
                        ''') in exclusive mode'
                ;
        end loop;
        dbms_output.new_line();
end;
/

prompt  ========================
prompt  How to disable intervals
prompt  ========================

alter table t2 set interval();

The code causes partitions to be created by locking the relevant partition for each date between the minimum and maximum in the t1 table; locking the partition is enough to create it if it doesn’t already exist. The code is a little wasteful since it locks each partition 7 times as we walk through the dates – but it’s only a quick demo for a model, and for copying a very large table the wastage would probably be very small compared to the work of doing the actual data copy. Obviously one could be more sophisticated and limit the code to locking and creating only the partitions needed, and only locking them once each.
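To complete the workaround described earlier (precreate partitions, disable intervals, load, re-enable intervals), the interval feature can be switched back on after the load with the same SET INTERVAL clause used in the original DDL:

```sql
-- Re-enable interval partitioning once the bulk load is complete.
alter table t2 set interval(numtodsinterval(7, 'day'));
```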


ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image

Yann Neuhaus - 16 hours 5 min ago

While playing with a brand new ODA X7-2M, I ran into some strange behaviour after re-imaging the ODA with the latest version, 12.2.1.2.0. Basically, after re-imaging and running configure-firstnet, the next step is to import the GI clone into the repository before creating the appliance. Unfortunately this command fails with the error DCS-10001: Internal error encountered: Fail to start hand shake to localhost:7070. So let's have a look at how to fix it…

First of all, re-imaging is really straightforward and works very well. I simply accessed the ILOM remote console to attach the ISO file for the ODA, in this case patch 23530609 from MOS, and restarted the box from the CD-ROM. After approximately 40 minutes you have a brand new ODA running the latest release.

Of course, instead of re-imaging, I could “simply” have updated/upgraded the DCS agent to the latest version. Let's say that I like to start from a “clean” situation when deploying a new environment, and patching a system that hasn't even been installed yet sounds a bit strange to me ;-)

So once re-imaged, the ODA is ready for deployment. The first step is to configure the network so that I can SSH to it and go ahead with the appliance creation. This takes only 2 minutes using the command configure-firstnet.

The last requirement before running the appliance creation is to import the GI clone, here patch p27119393_122120, into the repository. Unfortunately, that's exactly where the problem starts…

[Screenshot: update-repository failing with the DCS-10001 hand shake error]

Hmmm… I can't get it into the repository due to a strange hand shake error. So let's at least check whether the web interface is working (…using Chrome, of course…)

[Screenshot: the web interface failing to load]

Same thing here: it is not possible to reach the web interface at all.

While searching a bit for this error, we finally landed in the Known Issues chapter of the ODA 12.2.1.2.0 Release Notes, which sounded promising. Unfortunately, none of the listed errors really matched our case. However, a small search in the page for the error message pointed out the following case:

[Screenshot: the matching known issue from the release notes]

OK, the error is ODA X7-2HA related, but let's give it a try.

[Screenshot: restarting the DCS agent]

Once DCS is restarted, just re-run the update-repository command.

[Screenshot: update-repository succeeding after the DCS restart]

Here we go! The job has been submitted and the GI clone is imported into the repository :-)

After that, the CREATE APPLIANCE will run like a charm.

Hope it helped!


Cet article ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image est apparu en premier sur Blog dbi services.

Strange dependency in user_dependency: view depends on unreferenced function

Tom Kyte - Tue, 2018-02-20 21:26
Dear Team, I will try to simplify the scenario we have, using a simple test case: <code> SQL> create table test_20 ( a number) 2 / Table created. SQL> SQL> create or replace function test_function (p_1 in number) 2 return num...
Categories: DBA Blogs

Report for employee attendance

Tom Kyte - Tue, 2018-02-20 21:26
I am sorry for asking this seemingly trivial question, but I have been struggling with it for some time, my deadline is approaching and I can't find any answers for it. I have 3 tables: Calendar table: <code>CREATE TABLE "CJ_CAL" ( "CAL_ID...
Categories: DBA Blogs

Using SELECT * combined with WITH-CLAUSE - Bad Practice? View gets compiled with static columns list

Tom Kyte - Tue, 2018-02-20 21:26
Hey guys, I have a question regarding clean SQL code / bad practice around the use of wildcards in SELECT statements. In the provided example I have a base query with a huge list of columns selected and two (or more) sources I need to have combin...
Categories: DBA Blogs

Podcast: DevOps in the Real World: Culture, Tools, Adoption

OTN TechBlog - Tue, 2018-02-20 17:38

Among technology trends DevOps is certainly generating its share of heat. But is that heat actually driving adoption? “I’m going to give the answer everyone hates: It depends,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC. “It depends on where each team is, on where the organization is. I talk to people all over the industry, and I work with organizations all over the industry, and everyone is at a very different place.”

Some of the organizations Nicole has spoken with are pushing the DevOps envelope. “They’re almost squeezing blood out of a stone, finding ways to optimize things that have been optimized at the very edge. They’re doing things that most people can’t even comprehend.” Other organizations aren't feeling it. "There’s no DevOps,” says Nicole. “DevOps is nowhere near on their radar.”

Some organizations that had figured out DevOps stumbled a bit when the word came down to move everything to the cloud, explains Shay Shmeltzer, product management director for Oracle Cloud Development tools. “A lot of them need to rethink how they’re doing stuff, because cloud actually simplifies DevOps to some degree. It makes the provisioning of environments and getting stuff up and down much easier and quicker in many cases.”

As Nicole explains, “DevOps is a technology transformation methodology that makes your move into the cloud much more sticky, much more successful, much more effective and efficient to deliver value, to realize cost-savings. You can get so much more out of the technology that you are using and leveraging, so that when you do move to the cloud, everything is so much better. It’s almost a chicken and egg thing. You need so much of it together.”

However, that value isn’t always apparent to everyone. Kelly Shortridge, product manager at SecurityScorecard, observes that some security stakeholders, “feel they don’t have a place in the DevOps movement.” Some security teams have a sense that configuration management will suffice. “Then they realize that they can’t just port existing security solutions or existing security methodologies directly into agile development processes,” explains Kelly. “You have the opportunity to start influencing change earlier in the cycle, which I think was the hype. Now we’re at the Trough of Disillusionment, where people are discovering that it’s actually very hard to integrate properly, and you can’t just rely on technology for this shift. There also has to be a cultural shift, as far as security, and how they think about their interactions with engineers.” In that context Kelly sees security teams wrestling with how to interact within the organization.

But the value of DevOps is not lost on other roles and disciplines. It depends on how you slice it, explains Leonid Igolnik, member and angel investor with Sand Hill Angels, and founding investor, advisor, and managing partner with Batchery. He observes that DevOps progress varies across different industry subsets and different disciplines, “whether it’s testing, development, or security.”

“Overall, I think we’re reaching the Slope of Enlightenment, and some of those slices are reaching the Plateau of Productivity,” Leonid says.

Alena Prokharchyk began her journey into DevOps three years ago when she started her job as principal software engineer at Rancher Labs, whose principal product targets DevOps. “That actually forced me to look deeper into DevOps culture,” she says. “Before that I didn’t realize that such problems existed to this extent. That helped me understand certain aspects of the problem. Within the company, the key for me was communication with the DevOps team. Because if I’m going to develop something for DevOps, I have to understand the problems.”

If you’re after a better understanding of challenges and opportunities DevOps represents, you’ll want to check out this podcast, featuring more insight on adoption, cultural change, tools and other DevOps aspects from this collection of experts.

The Panelists

(Listed alphabetically)

  • Nicole Forsgren
    Founder and CEO, DevOps Research and Assessment LLC
  • Leonid Igolnik
    Member and Angel Investor, Sand Hill Angels
    Founding Investor, Advisor, Managing Partner, Batchery
  • Alena Prokharchyk
    Principal Software Engineer, Rancher Labs
  • Baruch Sadogursky
    Developer Advocate, JFrog
  • Shay Shmeltzer
    Director of Product Management, Oracle Cloud Development Tools
  • Kelly Shortridge
    Product Manager, SecurityScorecard

Additional Resources Coming Soon
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.
  • AI Beyond Chatbots
    How is Artificial Intelligence being applied to modern applications? What are the options and capabilities? What patterns are emerging in the application of AI? A panel of experts provides the answers to these and other questions.
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


Oracle Database 18c released! Image-based Installation?

Dietrich Schroff - Tue, 2018-02-20 14:19
Today I discovered this on otn.oracle.com --> database:

The downloads page looks the same as in recent weeks - 12c is the default...

... but the documentation tab lists 18c:

blogs.oracle.com shows the following:


So the "cloud first" strategy is still in place (by the way: is this something like "america first"?).

The installation procedure looks strange:
Starting with Oracle Database 18c, installation and configuration of Oracle Database software is simplified with image-based installation.
To install Oracle Database, create the new Oracle home, extract the image file into the newly-created Oracle home, and run the setup wizard to register the Oracle Database product.
Using image-based installation, you can install and upgrade Oracle Database for single-instance and cluster configurations.
Oracle illustrates this as follows:

But there is no 18c available for download on OTN...



EBS Release 12.2.7 VM Virtual Appliance Now Available

Steven Chan - Tue, 2018-02-20 12:10

[Contributing author: Robert Farrington]

We've just released an Oracle E-Business Suite Release 12.2.7 VM Virtual Appliance on the Oracle Software Delivery Cloud.

Note: The software package includes a README document with useful getting-started guidance.

You can use this appliance to create an Oracle E-Business Suite 12.2.7 Vision demonstration instance on a single, unified virtual machine containing both the database tier and the application tier.

Use with Oracle VM Manager or Oracle VM VirtualBox

This virtual appliance can be imported into Oracle VM Manager to deploy an Oracle E-Business Suite Linux 64-bit environment on compatible server-class machines running Oracle VM Server. It can also be imported into Oracle VM VirtualBox to create a virtual machine on a desktop PC or laptop.

Note: This virtual appliance is for on-premises use only. If you're interested in running Oracle E-Business Suite on Oracle Cloud, see Getting Started with Oracle E-Business Suite on Oracle Cloud (My Oracle Support Knowledge Document 2066260.1) for a comprehensive overview of what is available.

EBS Technology Stack Components

The Oracle E-Business Suite 12.2.7 VM virtual appliance delivers the full software stack, including the Oracle Linux 6.9 (64-bit) operating system, Oracle E-Business Suite, and additional required technology components.

The embedded technology components are listed below:

  • RDBMS Oracle Home: 12.1.0.2
  • Application Code Level: Oracle E-Business Suite 12.2.7 Release Update Pack (see My Oracle Support Knowledge Document 2230783.1) with AD-TXK Delta 10 (see My Oracle Support Knowledge Document 1617461.1)
  • Oracle Forms and Reports: 10.1.2.3
  • WebLogic Server: 10.3.6
  • Web Tier: 11.1.1.9
  • JDK: 1.7 build 1.7.0_161-b13
  • Java Plugin: J2SE 1.7 Critical Patch Update (CPU) October 2017


REST Services Availability

Oracle E-Business Suite Integrated SOA Gateway (ISG) provides functionality to expose integration interfaces published in the Integration Repository as SOAP and REST based web services. Oracle VM Virtual Appliance for Oracle E-Business Suite Release 12.2.7 is configured for ISG REST Services. PL/SQL APIs, Java Bean Services, Application Module Services, and Concurrent Programs are REST enabled in this VM virtual appliance.

Categories: APPS Blogs

Upcoming Desktop Integration Webinar - OAUG AppsTech 2018

Steven Chan - Tue, 2018-02-20 11:01

Join us for this upcoming Oracle E-Business Suite webinar and learn about recent enhancements and features in EBS Desktop Integration:

  • Presenter: Senthilkumar Ramalingam, Oracle Applications Technology Development
  • Title: Oracle E-Business Suite Desktop Integration
  • Date and Time: Wednesday, February 21, 2018, 1:00 p.m. EST (6:00 p.m. GMT)

You can register here for this OAUG AppsTech eLearning Series session.

A complete listing of sessions for the OAUG AppsTech eLearning Series is available on the OAUG website.

Categories: APPS Blogs

One command database upgrade on ODA

Yann Neuhaus - Tue, 2018-02-20 07:00

Oracle Database 12.2 has finally arrived on ODA and is now available on all generations. Modern ODAs now support the 11.2.0.4, 12.1.0.2 and 12.2.0.1 database engines, and these three versions can run side by side without any problem.

You probably plan to upgrade some old databases to the latest engine, at least those still running on 11.2. As you may know, Premier Support for 11.2 ended in January 2015: it’s time to think about an upgrade. Note that Premier Support for 12.1 will end in July 2018. In practice, running 11.2 or 12.1 databases will require Extended Support this year, and Extended Support is not free, as you can imagine. There is one exception: Oracle is offering Extended Support for 11.2.0.4 to its customers until the end of 2018.

Database upgrades have always been a lot of work, and they are often paired with a platform change. You need to recreate the databases and tablespaces, export and import the data with Data Pump, fix the problems, and so on. Sometimes you can restore the old database to the new server with RMAN, but only if the old engine is supported on your brand new server/OS combination.

As the ODA is a longer-term platform, you can think about upgrading the database directly on the appliance. A few years ago you would have used dbua or catupgrd, but the latest ODA package now includes a tool for one-command database upgrade. Let’s try it!

odacli, the ODA Command Line Interface, has a new option: upgrade-database. Parameters are very limited:

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -h
Usage: upgrade-database [options]
Options:
--databaseids, -i
Database IDs to be upgraded
Default: []
* --destDbHomeId, -to
DB HOME ID of the destination
--help, -h
get help
--json, -j
json output
--sourceDbHomeId, -from
DB HOME ID of the source

You need to provide the database identifier (the ODA keeps a repository of all databases, db homes and jobs in a JavaDB/Derby database) and the identifier of the destination db home you want to upgrade to. The source db home id is optional, as Oracle can determine it quite easily. There is no other option (for the moment): no pre-upgrade backup (which would be advised) and no storage migration (switching between ACFS and ASM), for example.

Imagine you have an 11.2.0.4 database you want to upgrade to 12.2.0.1. Look for the id of your database ODAWS11:

[root@oda-dbi01 2018-02-19]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       11.2.0.4             false      OLTP     odb2     ACFS       Configured   72023166-a39c-4a93-98b7-d552029b2eea

Note that this database is configured with ACFS, as 11.2 databases cannot be stored directly in 12c ASM.
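Picking the right ID out of this listing by hand is error-prone. Here is a minimal helper, a sketch only, assuming the column layout shown above (ID in the first column, DB name in the second):

```shell
#!/bin/sh
# Sketch: print the ID of a database, given its name, from the
# `odacli list-databases` output read on stdin. Assumes the listing
# format shown in this post: ID is column 1, DB Name is column 2.
db_id_by_name() {
  awk -v name="$1" '$2 == name { print $1 }'
}
```

With the function defined, something like `odacli list-databases | db_id_by_name ODAWS11` should print just the ID, ready to be pasted into `upgrade-database -i`.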

You can only upgrade this database to an existing db home: if you want to upgrade it to a new home, just create that new home first, for example:

[root@oda-dbi01 2018-02-19]# odacli create-dbhome -v 12.1.0.2.171017

If you want to use an existing home, just pick the db home id, for example here the one used by ODAWS database.

Let’s do the upgrade:

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -i de281792-1904-4536-b42c-8a55df489b73 -to 1ca87df9-4691-47ed-90a9-2a794128539d

{
"jobId" : "782e65fd-8b2b-4d16-a542-1f5b2b78d308",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "February 19, 2018 17:40:58 PM CET",
"resourceList" : [ ],
"description" : "Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]",
"updatedTime" : "February 19, 2018 17:40:58 PM CET"
}

odacli will schedule a job for that, as for other operations. You can follow the job with describe-job:

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308

Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
Created:  February 19, 2018 5:40:58 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Success
Database Upgrade                                   February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Running

You can also look at the database alert.log file during the operation.

Be patient! A database upgrade takes time, at least 20 minutes even for an empty database. And other jobs submitted during the upgrade seem to stay in waiting state (a create-database, for example).
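Rather than re-running describe-job by hand, the wait can be scripted. A minimal sketch: the polling function takes the command to run as its arguments so it can be tested with a stub; the real `odacli describe-job -i <jobId>` invocation in the comment is an assumption about your job id.

```shell
#!/bin/sh
# Sketch: poll a describe-job-style command until its output reports the
# "Database Upgrade" task as Success, or give up after $1 attempts.
# On a real ODA you would call something like:
#   wait_for_upgrade 60 odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308
wait_for_upgrade() {
  max="$1"; shift
  i=0
  while [ "$i" -lt "$max" ]; do
    # Look for the upgrade task line ending in Success
    if "$@" | grep -q 'Database Upgrade.*Success'; then
      echo "upgrade finished"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for upgrade" >&2
  return 1
}
```

A longer sleep interval (30 or 60 seconds) would be more sensible for a real 20-minute upgrade.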

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308

Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
Created:  February 19, 2018 5:40:58 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Success
Database Upgrade                                   February 19, 2018 5:40:58 PM CET    February 19, 2018 6:01:37 PM CET    Success

Now the upgrade seems OK, let’s check that:

su - oracle
. oraenv <<< ODAWS11
oracle@oda-dbi01:/home/oracle/ # sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 19 18:01:49 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name, version from v$instance;

INSTANCE_NAME     VERSION
---------------- -----------------
ODAWS11      12.2.0.1.0

SQL> sho parameter spfile

NAME                 TYPE     VALUE
-------------------- -------- ---------------------------------------------------------------
spfile               string   /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/spfileODAWS11.ora

Even the spfile has been moved to the new home, quite nice.

Let’s check the repository:

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d

Everything looks fine!

Now let’s test the upgrade with a 12.1 database, ODAWS12. This one is using ASM storage:

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.1.0.2             false      OLTP     odb2     ASM        Configured   520167d7-59c8-4732-80a6-cc32ef745cec

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -i 24821a48-7474-4a8b-8f36-afca399b6def -to 1ca87df9-4691-47ed-90a9-2a794128539d
{
"jobId" : "10a2a304-4e8e-4b82-acdc-e4c0aa8b21be",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "February 19, 2018 18:36:17 PM CET",
"resourceList" : [ ],
"description" : "Database service upgrade with db ids: [24821a48-7474-4a8b-8f36-afca399b6def]",
"updatedTime" : "February 19, 2018 18:36:17 PM CET"
}

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.1.0.2             false      OLTP     odb2     ASM        Updating     520167d7-59c8-4732-80a6-cc32ef745cec

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 10a2a304-4e8e-4b82-acdc-e4c0aa8b21be

Job details
----------------------------------------------------------------
ID:  10a2a304-4e8e-4b82-acdc-e4c0aa8b21be
Description:  Database service upgrade with db ids: [24821a48-7474-4a8b-8f36-afca399b6def]
Status:  Running
Created:  February 19, 2018 6:36:17 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 6:36:17 PM CET    February 19, 2018 6:36:17 PM CET    Success
Database Upgrade                                   February 19, 2018 6:36:17 PM CET    February 19, 2018 6:58:05 PM CET    Success

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d

su - oracle
. oraenv <<< ODAWS12
oracle@oda-dbi01:/home/oracle/ # sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 19 18:59:08 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name, version from v$instance;

INSTANCE_NAME     VERSION
---------------- -----------------
ODAWS12      12.2.0.1.0

SQL> sho parameter spfile

NAME               TYPE       VALUE
------------------ ---------- ---------------------------------------------
spfile             string     +DATA/ODAWS12/PARAMETERFILE/spfileodaws12.ora

It also worked fine with a 12.1 database, and again took about 20 minutes for an empty database.

You may have noticed that it’s possible to upgrade several databases at the same time by providing multiple database ids. Not sure if you would do that in real life :-)
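For such a batch upgrade, the -i option presumably takes a comma-separated list of IDs (the help output above only shows it as a list with default []; check the exact syntax against your odacli version). A small helper can build that argument:

```shell
#!/bin/sh
# Sketch: join database IDs into the comma-separated value expected
# (assumption) by `odacli upgrade-database -i`.
join_ids() {
  ids=""
  for id in "$@"; do
    # Append a comma only when $ids is already non-empty
    ids="${ids:+$ids,}$id"
  done
  printf '%s\n' "$ids"
}

# Hypothetical batch invocation, using the IDs from this post:
# odacli upgrade-database \
#   -i "$(join_ids de281792-1904-4536-b42c-8a55df489b73 24821a48-7474-4a8b-8f36-afca399b6def)" \
#   -to 1ca87df9-4691-47ed-90a9-2a794128539d
```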

upgrade-database is also available on ODAs still using oakcli (nowadays only virtualized ODAs, I think), but as oakcli has no repository, the database id is replaced by the database name, and the db home id by the home name registered in the classic oraInventory, for example:

oakcli upgrade database -db ODAWS11 -to OraDb12201_home1

 

This great feature will not revolutionize your DBA life, but it should help you upgrade your databases with minimum effort.

 

Cet article One command database upgrade on ODA est apparu en premier sur Blog dbi services.

Taking Notes – 2

Jonathan Lewis - Tue, 2018-02-20 05:08

[Originally written August 2015, but not previously published]

If I’m taking notes in a presentation that you’re giving there are essentially four possible reasons:

  • You’ve said something interesting that I didn’t know and I’m going to check it and think about the consequences
  • You’ve said something that I knew but you’ve said it in a way that made me think of some possible consequences that I need to check
  • You’ve said something that I think is wrong or out of date and I need to check it
  • You’ve said something that has given me a brilliant idea for solving a problem I’ve had to work around in the past and I need to work out the details

Any which way, if I’m taking notes it means I’ve probably just added a few more hours of work to my todo list.

Footnote

“Checking” can include:

  • having a chat
  • reading the manuals
  • finding a recent Oracle white-paper
  • searching MoS
  • building some models

Philosophy

Jonathan Lewis - Tue, 2018-02-20 05:03

Here’s a note I’ve just re-discovered – at the time I was probably planning to extend it into a longer article but I’ve decided to publish the condensed form straight away.

In a question to the Oak Table a couple of years ago (May 2015) Cary Millsap asked the following:

If you had an opportunity to tell a wide audience of system owners, users, managers, project leaders, system architects, DBAs, and developers “The most important things you should know about Oracle” what would you tell them?

I imagine that since then Cary has probably discussed the pros and cons of some of the resulting thoughts in one of his excellent presentations on how to do the right things, but this was my quick response:

If I had to address them all at once it would be time to go more philosophical than technical.

The single most important point: Oracle is a very large, complex, and flexible product. It doesn’t matter where you are approaching it from you will not have enough information on your own to make best use of it. You have to talk to your peer group to get alternative ideas, and you have to talk to the people at least one step either side of you on the technology chain (dev to dba, dba to sysadmin, Architect to dev, dba to auditor etc.) to understand options and consequences. Create 4 or 5 scenarios of how your system should behave and then get other people – and not just your peer group – to identify their advantages and threats.

Pages

Subscribe to Oracle FAQ aggregator