Tom Kyte
Find objects used in a SQL
We have a UI-based application where users come in and set up SQL statements to get Excel-based reports back; there are multiple options to choose for the schedule of the SQL execution and other related parameters. It is a free-text box, and the expectation is that users will test their SQL in the database before they set it up in this tool. One of our current requirements is to identify the dependent objects in the SQL.
If this is the SQL:
<code>select
a.col1, b.col2, c.col3
from
table_a a,
view b,
pipe_line_function c
where a.col1 = b.col2
and b.col2 = c.col3;</code>
As the list of dependent objects, the requirement is to get:
TABLE_A
VIEW, and the objects within the view, drilling down to the base tables or the most granular objects.
PIPE_LINE_FUNCTION, and the objects within it, drilling down to the base tables or the most granular objects.
Is this possible using any new SQL functions, dependency functions, etc., without creating a view of the SQL set up by the users? We are aware of DBA_DEPENDENCIES, but it is not possible for us to create a view and then grab the dependencies, hence this question.
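For what it's worth, one hedged workaround that needs no view: EXPLAIN the user's statement and read the referenced objects from PLAN_TABLE, then recurse through DBA_DEPENDENCIES for any view or PL/SQL object found. A sketch (object names are adapted from the example above; the pipelined function call is wrapped in TABLE() so the statement parses):

```sql
-- Explain the user's statement (no data is touched).
EXPLAIN PLAN FOR
  SELECT a.col1, b.col2, c.col3
  FROM   table_a a, view_b b, TABLE(pipe_line_function) c
  WHERE  a.col1 = b.col2 AND b.col2 = c.col3;

-- Objects the optimizer actually references (views are often merged,
-- so base tables may already appear here).
SELECT DISTINCT object_owner, object_name, object_type
FROM   plan_table
WHERE  object_name IS NOT NULL;

-- Drill down from a named view or function to its base objects.
SELECT referenced_owner, referenced_name, referenced_type
FROM   dba_dependencies
START WITH  owner = USER AND name = 'VIEW_B'
CONNECT BY  PRIOR referenced_name  = name
        AND PRIOR referenced_owner = owner;
```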
Categories: DBA Blogs
Shrinking High Water Mark
I have noticed that the HWM only goes down when the table is truncated. If I do the following, will it lower the HWM?
<code>CREATE TABLE temp AS SELECT * FROM table_name;
TRUNCATE TABLE table_name;
INSERT INTO table_name SELECT * FROM temp;
COMMIT;</code>
This has sometimes been successful at lowering the HWM and sometimes not; I am wondering why it is inconsistent.
Is there a better way?
I am measuring the used blocks with the following (note a separator is needed in the concatenation, otherwise distinct block/file pairs can collide):
<code>select count(distinct dbms_rowid.rowid_block_number(rowid) || '.' || dbms_rowid.rowid_relative_fno(rowid)) "Used"
from table_name;</code>
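For reference, a sketch of two alternatives that lower the HWM in place (SHRINK SPACE assumes an ASSM tablespace; table and index names are placeholders):

```sql
-- Online shrink: compacts rows and lowers the HWM; requires row movement.
ALTER TABLE table_name ENABLE ROW MOVEMENT;
ALTER TABLE table_name SHRINK SPACE;

-- Or rebuild the segment; note MOVE changes rowids, so dependent indexes
-- become UNUSABLE and must be rebuilt afterwards.
ALTER TABLE table_name MOVE;
ALTER INDEX table_name_idx REBUILD;
```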
Thanks
Categories: DBA Blogs
Error in job of type: executable (window ) (express edition)
I have "Windows 11 Home Single Language". Device name: Laptopnum02. There is no password on my account.
1. Create a test .cmd file: I created a .txt file and then changed the extension to .cmd, giving creararchivo.cmd:
<code>@echo off
echo This is the content of the file created by demo.cmd. > archivo_creado.txt
echo Second line in the created file. >> archivo_creado.txt
echo Third line in the created file. >> archivo_creado.txt</code>
I disabled my antivirus (Avast).
Create the credential in SQL Developer:
<code>BEGIN
DBMS_SCHEDULER.CREATE_CREDENTIAL (
credential_name => 'CREDENCIAL_PRUEBA',
username => 'TONYROJAS',
password => 'ABC',
windows_domain => 'SYSTEM'
);
END;
/
-- Create the job
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'PRUEBA',
job_type => 'EXECUTABLE',
job_action => 'C:\WINDOWS\SYSTEM32\CMD.EXE',
number_of_arguments => 2,
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=DAILY; BYHOUR=23',
enabled => FALSE,
auto_drop => FALSE,
comments => 'Job para realizar backup diario'
);
-- Run commands inside the command prompt
DBMS_SCHEDULER.set_job_argument_value ('PRUEBA', 1, '/C');
DBMS_SCHEDULER.set_job_argument_value ('PRUEBA', 2, 'C:\BackupOracle\creararchivo.cmd');
DBMS_SCHEDULER.set_attribute ('PRUEBA', 'credential_name', 'CREDENCIAL_PRUEBA');
DBMS_SCHEDULER.enable ('PRUEBA');
END;
/
-- Execute the job
BEGIN DBMS_SCHEDULER.run_job('PRUEBA'); END;
/
Error:
Bug Report -
ORA-27369: job of type EXECUTABLE failed with exit code: 7 Invalid username or password
ORA-06512: at "SYS.DBMS_ISCHED", line 241
ORA-06512: at "SYS.DBMS_SCHEDULER", line 566
Also, when I remove the credential from the job by deleting this line: DBMS_SCHEDULER.set_attribute('PRUEBA', 'credential_name', 'CREDENCIAL_PRUEBA');</code>
And run it again, I get the following error:
<code>Bug Report -
ORA-27369: job of type EXECUTABLE failed with exit code: 1 Access Denied.
Access denied.
Access denied.</code>
I would like to understand both errors and how to solve them, please.
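For context, a hedged sketch of how the credential would normally look for a local (non-domain) Windows account: the domain is the computer name rather than 'SYSTEM', and the account needs a non-blank password plus the "Log on as a batch job" right (names below are from the question; the password is a placeholder):

```sql
BEGIN
  DBMS_SCHEDULER.create_credential (
    credential_name => 'CREDENCIAL_PRUEBA',
    username        => 'TONYROJAS',
    password        => 'a_real_password',  -- a blank password cannot be used
    windows_domain  => 'LAPTOPNUM02'       -- machine name for a local account
  );
END;
/
```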
Categories: DBA Blogs
file transfer
Hi Tom,
I am getting the following error when I use the COPY_FILE procedure of the DBMS_FILE_TRANSFER package. Here I am trying to copy a log file from one folder to another.
Thanks
SQL> BEGIN
2 dbms_file_transfer.copy_file(source_directory_object =>
3 'SOURCE_DIR', source_file_name => 'sqlnet.log',
4 destination_directory_object => 'DEST_DIR',
5 destination_file_name => 'sqlnet.log');
6 END;
7 /
BEGIN
*
ERROR at line 1:
ORA-19505: failed to identify file "c:\temp\source\sqlnet.log"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 3223)
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 84
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 193
ORA-06512: at line 2
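As background, DBMS_FILE_TRANSFER is designed for database files and requires the file size to be a multiple of 512 bytes, which a text file such as sqlnet.log rarely is. A sketch of an alternative using UTL_FILE.FCOPY, which copies text files line by line and has no such restriction (directory object names are from the question):

```sql
BEGIN
  UTL_FILE.FCOPY ('SOURCE_DIR', 'sqlnet.log',   -- source directory object, file
                  'DEST_DIR',   'sqlnet.log');  -- destination directory, file
END;
/
```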
Categories: DBA Blogs
Plan problem using composite domain index on CLOB and XMLTYPE
Hi,
I need to perform full-text searches on an XMLType column. The table containing this column has other columns that can be used to narrow the search within the full-text index (for example, a date column). However, I noticed that with a composite domain index on the XMLType column, the optimizer ignores the possible filter on the date column and applies it as a subsequent filter, after the records have been identified with the context index.
This only happens if the column the context index is created on is of type XMLType. I ran a test creating the same table with the column changed to CLOB, and the plan looks correct to me, i.e. the records are located using the context index alone, with no subsequent filter.
It can also be seen from the trace that in the second case, with the same data, the cost of the query is significantly lower.
The two tables, one with an XMLType column, the other with a CLOB column:
<code>
create table test_xml
(
id number,
dt date,
xmltext xmltype
);
create table test_xml_clob
(
id number,
dt date,
xmltext clob
);
</code>
Script to populate some data
<code>
declare
cnt number := 0;
begin
for x in 1..100 loop
for m in 1..12 loop
for d in 1..28 loop
for y in 2022..2024 loop
insert into test_xml values(cnt, to_date(to_char(m) || '/' || to_char(d) || '/' || to_char(y), 'mm/dd/yyyy'), '<?xml version="1.0" encoding="UTF-8"?><root><item>str1</item></root>');
insert into test_xml_clob values(cnt, to_date(to_char(m) || '/' || to_char(d) || '/' || to_char(y), 'mm/dd/yyyy'), '<?xml version="1.0" encoding="UTF-8"?><root><item>str1</item></root>');
cnt := cnt+1;
end loop;
end loop;
end loop;
commit;
end loop;
end;
</code>
The two indexes, both with the <b>FILTER BY</b> option:
<code>
CREATE INDEX ndx_cdi_xmlft_test ON test_xml (xmltext)
INDEXTYPE IS CTXSYS.CONTEXT
FILTER BY dt PARAMETERS ('FILTER CTXSYS.NULL_FILTER');
CREATE INDEX ndx_cdi_xmlclobft_test ON test_xml_clob (xmltext)
INDEXTYPE IS CTXSYS.CONTEXT
FILTER BY dt PARAMETERS ('FILTER CTXSYS.NULL_FILTER');
</code>
Statistics:
<code>
exec dbms_stats.gather_table_stats('', tabname => 'TEST_XML_CLOB', cascade => TRUE);
exec dbms_stats.gather_table_stats('', tabname => 'TEST_XML', cascade => TRUE);
</code>
The queries used for testing. The first one has a much higher cost:
<code>
select * from test_xml where contains(xmltext, 'str1') > 0 and dt between to_date('01/01/2022', 'dd/mm/yyyy') and to_date('02/01/2022', 'dd/mm/yyyy') ;
select * from test_xml_clob where contains(xmltext, 'str1') > 0 and dt between to_date('01/01/2022', 'dd/mm/yyyy') and to_date('02/01/2022', 'dd/mm/yyyy') ;
</code>
<code>select banner_full from v$version</code>
BANNER_FULL ...
Categories: DBA Blogs
Factors Affecting Rollback Speed
Hey Tom,
1. I would like to know the factors influencing the rollback speed of a transaction, and why it generally takes longer than the original transaction did. Could you shed some light on this?
2. I read one of your answers where you have said
"A rollback is a logical operation, if you just did 1,000,000 row inserts and roll them back - we now have 1,000,000 deletes to perform (including the index maintenance and everything)
rollback insert = do a logical delete
rollback delete = do a logical insert
rollback update = do a logical update to put it back the way it was"
How does object locking work here? Is it the same mechanism as the original DML, or is it different?
Categories: DBA Blogs
CRUD APIs in Oracle Database
Hi,
What are the best practices for developing CRUD/DML APIs (using packages or subprograms) in Oracle? If I want to insert, read, update, or delete data, I want to do it through APIs. Furthermore, what are the possible cons of using APIs instead of plain queries in web applications?
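As one illustration of the table-API pattern, a minimal sketch (the EMP table, its columns, and all names are hypothetical): callers reach the data only through the package, which gives a single place for validation, auditing, and instrumentation.

```sql
CREATE OR REPLACE PACKAGE emp_api IS
  PROCEDURE ins      (p_id IN NUMBER, p_name IN VARCHAR2);
  FUNCTION  get_name (p_id IN NUMBER) RETURN VARCHAR2;
  PROCEDURE upd_name (p_id IN NUMBER, p_name IN VARCHAR2);
  PROCEDURE del      (p_id IN NUMBER);
END emp_api;
/
CREATE OR REPLACE PACKAGE BODY emp_api IS
  PROCEDURE ins (p_id IN NUMBER, p_name IN VARCHAR2) IS
  BEGIN
    INSERT INTO emp (id, name) VALUES (p_id, p_name);
  END;

  FUNCTION get_name (p_id IN NUMBER) RETURN VARCHAR2 IS
    l_name emp.name%TYPE;
  BEGIN
    SELECT name INTO l_name FROM emp WHERE id = p_id;
    RETURN l_name;
  END;

  PROCEDURE upd_name (p_id IN NUMBER, p_name IN VARCHAR2) IS
  BEGIN
    UPDATE emp SET name = p_name WHERE id = p_id;
  END;

  PROCEDURE del (p_id IN NUMBER) IS
  BEGIN
    DELETE FROM emp WHERE id = p_id;
  END;
END emp_api;
/
```

The usual trade-off: APIs centralize logic and hide schema changes from the application, at the cost of an extra layer to maintain and less flexibility for ad-hoc queries.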
Thank you.
Categories: DBA Blogs
SQLcl APEX export does not include application audit information
Hi!
We have a lot of Apex apps, so we check the LAST_UPDATED_ON value in both the production and dev apps before deploying to production.
Like this:
<code>
select
d.APPLICATION_ID
, d.APPLICATION_NAME
, d.ALIAS
, d.OWNER DEV_OWNER
, p.OWNER PERP01_OWNER
, d.LAST_UPDATED_ON DEV
, p.LAST_UPDATED_ON PERP01
, case
when d.LAST_UPDATED_ON < p.LAST_UPDATED_ON then 'PROD'
when d.LAST_UPDATED_ON > p.LAST_UPDATED_ON then 'DEV'
else null
end LAST
from apex_240100.APEX_APPLICATIONS@DEV d
left outer join apex_240100.APEX_APPLICATIONS@PROD p on
p.workspace = d.workspace
and p.application_id = d.application_id
where d.WORKSPACE = 'WS'
order by d.LAST_UPDATED_ON desc;
</code>
After the 24.1 installation, for applications exported with SQLcl, when the application is imported into the production environment the values of LAST_UPDATED_ON and LAST_UPDATED_BY are empty.
Any suggestions?
Categories: DBA Blogs
how to know if a PDF is digitally signed
Hello. I have an application in which users upload PDFs to a table. How can I check in PL/SQL if a PDF in a blob is digitally signed?
Ideally I'd like to know if the signature is valid, but just knowing if a file is not signed would be enough to reject it and save manual work.
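One hedged heuristic: a digitally signed PDF contains a signature dictionary with a /ByteRange entry, so scanning the BLOB for that token flags files that are probably signed. This does not validate the signature (that would need an external library); a sketch:

```sql
-- Returns 'Y' if the BLOB looks like a signed PDF, 'N' otherwise.
CREATE OR REPLACE FUNCTION is_probably_signed (p_pdf IN BLOB)
  RETURN VARCHAR2
IS
  l_pattern RAW(32) := UTL_RAW.cast_to_raw('/ByteRange');
BEGIN
  IF DBMS_LOB.instr (p_pdf, l_pattern, 1, 1) > 0 THEN
    RETURN 'Y';
  END IF;
  RETURN 'N';
END;
/
```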
Thanks
Categories: DBA Blogs
PARTITION RANGE ITERATOR
Hello,
I have a table TDRCORE.FCM_CCBF, partitioned as:
PARTITION BY RANGE (RQO_PROC_UTC_DATETIME TIMESTAMP(6))
INTERVAL( NUMTODSINTERVAL(7, 'DAY'))
There is a SQL statement running on the live DB that contains many UNIONs. I will not post the full script, just one branch of the union (all the other SELECTs in the union are exactly the same, just against different tables).
/* Formatted on 03-Jul-24 10:36:19 (QP5 v5.294) */
<code>
ALTER SESSION SET CURRENT_SCHEMA=TDRCORE;
EXPLAIN PLAN
FOR
WITH currTime
AS (SELECT TO_TIMESTAMP (
TO_CHAR (SYS_EXTRACT_UTC (CURRENT_TIMESTAMP),
'YYYY-MM-DD HH24:MI:SS'),
'YYYY-MM-DD HH24:MI:SS')
AS endts
FROM DUAL)
(SELECT 'FCM_CCBF' AS DBTABLE,
t26.CMX_TRAN_ID AS CMX_TRAN_ID,
t26.CMX_CREATE_TIMESTAMP,
currTime.endts,
CASE
WHEN ( ( COALESCE (t26.WAX_USER_FRAUD_A_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_B_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_C_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_D_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_H_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_M_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_O_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_P_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_R_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_T_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_W_IND, '0')
|| COALESCE (t26.WAX_USER_FRAUD_X_IND, '0'))) > 0
OR ( t26.WAX_SCORE_1_TAG IN ('1', '3')
OR t26.WAX_SCORE_2_TAG IN ('1', '3')
OR t26.WAX_SCORE_3_TAG IN ('1', '3')
OR t26.WAX_SCORE_4_TAG IN ('1', '3'))
THEN
1
ELSE
0
END
AS FRAUD_IND,
TRIM (t26.RRR_MODEL_ID_1) AS "RRR_MODEL_ID_1",
TRIM (t26.RRR_MODEL_ID_2) AS "RRR_MODEL_ID_2",
TRIM (t26.RRR_MODEL_ID_3) AS "RRR_MODEL_ID_3",
TRIM (t26.RRR_MODEL_ID_4) AS "RRR_MODEL_ID_4",
TRIM (t26.RRR_MODEL_SCORE_1) AS "RRR_MODEL_SCORE_1",
TRIM (t26.RRR_MODEL_SCORE_2) AS "RRR_MODEL_SCORE_2",
TRIM (t26.RRR_MODEL_SCORE_3) AS "RRR_MODEL_SCORE_3",
TRIM (t26.RRR_MODEL_SCORE_4) AS "RRR_MODEL_SCORE_4",
TRIM (t26.RRR_MODEL_VERSION_1) AS "RRR_MODEL_VERSION_1",
TRIM (t26.RRR_MODEL_VERSION_2) AS "RR...
Categories: DBA Blogs
shared_pool_size and the streams_pool_size parameters
Can the shared_pool_size and the streams_pool_size parameters be set to the same value?
Both parameters are currently set to zero. I would like to set them both to 2g. I will need to use about 1.5g of the streams_pool memory for Golden Gate. Will this work?
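For reference, when automatic shared memory management is in use (SGA_TARGET > 0), explicitly set values for these pools act as minimum sizes rather than fixed allocations, so setting both is a sketch like (values taken from the question):

```sql
-- Minimum sizes under ASMM; Oracle may grow either pool beyond these.
ALTER SYSTEM SET shared_pool_size  = 2G SCOPE = BOTH;
ALTER SYSTEM SET streams_pool_size = 2G SCOPE = BOTH;
```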
Thanks much for your help.
Categories: DBA Blogs
UNDO tablespace cannot support transaction
Hi,
Due to my work environment, I'm not allowed to cut and paste from my environment.
As a result I cannot provide a specific test case, only symptoms; could you give me general guidance on how to approach this problem?
I am running a huge insert of 10,878,201 rows, and each record is quite wide, with about 50 or 60 columns. The query was issued from Toad, and the connection handle was lost because Toad froze; when restarted, the same connection handle was lost again. However, I enabled a DBMS_SESSION trace on the sid/serial#, got a lot of UNDO segment info in the trace file, and then finally the trace says:
Problem: Undo tablespace cannot support required UNDO retention
Recommendation: Size UNDO tablespace to 3702 MB
However, there are 11 datafiles (32G * 11) in the UNDO tablespace, and AFAIK my INSERT INTO ... SELECT * FROM huge_table was the only active user transaction that day (4 days back). The space allocated could easily have supported the UNDO_RETENTION (28800).
REDO Logs: There are 4 log groups (2 per thread), each is 200MB, this particular transaction is running on thread 1 of RAC node1.
1) Should I try to split the transaction into manageable chunks and commit every 10,000 rows or so?
2) Will resizing the redo logs help?
Adding significant info to this question:
I get an ORA-30036 in the trace file generated about 4 days back when Toad froze.
However, the session still shows as ACTIVE in gv$session and last_call_et keeps increasing.
I hope this added info will help.
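Regarding question 1, a sketch of committing in chunks to bound the UNDO held by any one transaction (table names are placeholders); note a mid-load commit means the load is no longer atomic and must be made restartable:

```sql
DECLARE
  CURSOR c IS SELECT * FROM huge_table;
  TYPE t_rows IS TABLE OF huge_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    -- Fetch and insert 10,000 rows at a time, committing each batch.
    FETCH c BULK COLLECT INTO l_rows LIMIT 10000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table VALUES l_rows (i);
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/
```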
Thanks
Categories: DBA Blogs
Loading External Data, Flat file, into an Oracle Table.
Hi Tom,
I need to come up with an automated method to insert data (XMLType) from an external flat file into a table in an Oracle database using SQL Developer.
I have read some articles showing how to load files using a control file, but I couldn't find anything about how to automate the process.
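One way to automate this without a control file is an external table plus a scheduled INSERT. A sketch, under the assumption that each line of the flat file holds one complete XML document (directory, file, table, and column names are all hypothetical):

```sql
-- The flat file is exposed as a read-only table.
CREATE TABLE ext_xml_stage (
  xml_doc VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS (xml_doc CHAR(4000))
  )
  LOCATION ('data.xml')
)
REJECT LIMIT UNLIMITED;

-- The load itself is a plain INSERT, so a DBMS_SCHEDULER job can run it.
INSERT INTO target_table (doc)
  SELECT XMLTYPE (xml_doc) FROM ext_xml_stage;
COMMIT;
```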
Your help is greatly appreciated.
Thanks,
Chat
Categories: DBA Blogs
Using pragma inline to affect every invocation of the specified subprogram
I understand how to use the INLINE pragma before the invocation of a subprogram that should be inlined, but based on the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-optimization-and-tuning.html#GUID-2E78813E-CF29-409D-9F8B-AA24B294BFA2) it should also be possible to use PRAGMA INLINE to affect every invocation of the specified subprogram:
<i>When the INLINE pragma immediately precedes a declaration, it affects: Every invocation of the specified subprogram in that declaration</i>
Unfortunately, when I try to do so, it does not seem to work as the documentation describes:
<code>
ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:ALL';
CREATE OR REPLACE PACKAGE foo IS
PROCEDURE bar;
END foo;
/
CREATE OR REPLACE PACKAGE BODY foo IS
PRAGMA INLINE (sub, 'YES');
PROCEDURE sub;
PROCEDURE bar IS
BEGIN
dbms_output.put_line('bar');
sub;
END bar;
PROCEDURE sub IS
BEGIN
dbms_output.put_line('sub');
END sub;
END foo;
/
SELECT * FROM user_errors;
</code>
PLW-05011: pragma INLINE for procedure 'SUB' does not apply to any calls
Categories: DBA Blogs
Unable to successfully create a REST Enabled SQL service
1) I am using an Oracle XE 21c database and APEX 24.1, deployed on Tomcat 9.0.90 on my laptop.
2) I try to connect to my office server using the REST Enabled SQL service, where I have REST-enabled the entire schema.
3) On my laptop, when I try to connect, it gives me the error: "The URL of this remote server does not point to a REST Enabled SQL service. Please check the details of your REST Enabled SQL service definition."
4) When I enter the same URL in a browser, ending with a table name, it shows the data.
5) Before 24.1 (i.e. on 18.x to 20.x) it was working.
6) Does REST Enabled SQL require the same ORDS version on both sides? Please advise, because the APEX and ORDS versions differ on the two sides (my laptop and the office server I am trying to reach remotely).
7) At my office I REST-enabled the schema from SQL Workshop -> RESTful Services and also ran the following script:
<code>BEGIN
ORDS.enable_schema(
p_enabled => TRUE,
p_schema => 'SVM',
p_url_mapping_type => 'BASE_PATH',
p_url_mapping_pattern => 'hr',
p_auto_rest_auth => FALSE
);
COMMIT;
END;
/</code>
Please refer to the attached video:
https://drive.google.com/file/d/1VONL-ngqD53xjnxKLWSzob63qtD-jSdy/view?usp=sharing
The same issue appears on apex.oracle.com:
workspace :[redacted]
username:[redacted]
password :[redacted]
Application id :[redacted]
Rest enabled SQL service name : svm
Categories: DBA Blogs
Need SQL to find break hours between two dates, counting only breaks between 10 PM and 6 AM the next day; a 30-minute break every 4 hours
Could you please check and advise on the following: I need to find break hours between two dates, and also whether each break falls within night hours (10 PM to 6 AM the next day).
For example:
The employee's work schedule is 11-MAY-2012 16:00:00 to 12-MAY-2012 09:00:00, and the breaks are:
8:00 PM to 8:30 PM - 1st break
12:30 AM to 1:00 AM - 2nd break
5:00 AM to 5:30 AM - 3rd break
Two of the breaks fall between 10 PM and 6 AM the next day, so my night-hours break time is 60 minutes.
I am using the following (an excerpt from a larger CASE expression), but is there a better solution?
<code> (SELECT COUNT(1)
FROM (SELECT TO_CHAR ((:SHIFT_START + ((val*4.5)/24)), 'YYYY-MM-DD HH24:MI:SS') TIME ,VAL
FROM (select to_number(val,0) VAL
from (select distinct regexp_substr('1,2,3,4,5','[^,]+',1,level) val
from dual
connect by level <=regexp_count('1,2,3,4,5',',')+1
order by val)
WHERE ROWNUM <=
( FLOOR (
( 24
* ( TO_DATE (
TO_CHAR (:SHIFT_END, 'YYYY-MM-DD hh24:mi'),
'YYYY-MM-DD hh24:mi')
- TO_DATE (
TO_CHAR (:SHIFT_START,
'YYYY-MM-DD hh24:mi'),
'YYYY-MM-DD hh24:mi')))
/ 4))
)
) DT
WHERE TO_DATE (DT.TIME,'YYYY-MM-DD HH24:MI:SS')
BETWEEN TO_DATE(TRUNC(:SHIFT_START),'YYYY-MM-DD HH24:MI:SS')+22/24 AND
TO_DATE(TRUNC(:SHIFT_START),'YYYY-MM-DD HH24:MI:SS')+24/24+6/24
)*0.5
ELSE 0 END NIGHT_BREAK_HOURS</code>
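A possibly simpler sketch of the same idea, assuming (as the query above does) that a 30-minute break starts every 4.5 hours into the shift: generate the break start times with CONNECT BY and count the ones falling in the 10 PM to 6 AM window.

```sql
SELECT COUNT(*) * 0.5 AS night_break_hours
FROM  (
        -- One row per break start time within the shift.
        SELECT :shift_start + (LEVEL * 4.5) / 24 AS break_start
        FROM   dual
        CONNECT BY :shift_start + (LEVEL * 4.5) / 24 < :shift_end
      )
WHERE break_start >= TRUNC (:shift_start) + 22/24          -- 10 PM
  AND break_start <  TRUNC (:shift_start) + 1 + 6/24;      -- 6 AM next day
```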
I'd appreciate your kind help in checking and advising on this.
Categories: DBA Blogs
LOB caching check
Hi,
To improve the read performance of a LOB column in my table, I enabled the CACHE option for it.
But when I check the buffer cache using gv$bh (I ran some SELECT queries on it first so it should be cached), I cannot find any entries for my cached LOB.
<code>SELECT *
FROM gv$bh
WHERE lobid =
(SELECT object_id
FROM dba_lobs lob, dba_objects obj
WHERE lob.table_name = 'MY_TABLE' -- it contains only one lob column, so the returned result is unique
AND lob.SEGMENT_NAME = obj.object_name
AND lob.owner = obj.owner);</code>
Is this the right approach, or am I missing something?
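A hedged variation worth trying: v$bh identifies buffers by data object id (OBJD), so joining on the LOB segment's DATA_OBJECT_ID is another way to look for cached LOB blocks (table name is from the question):

```sql
SELECT COUNT(*) AS cached_lob_buffers
FROM   gv$bh b
WHERE  b.objd = (SELECT obj.data_object_id
                 FROM   dba_lobs lob, dba_objects obj
                 WHERE  lob.table_name   = 'MY_TABLE'
                 AND    lob.segment_name = obj.object_name
                 AND    lob.owner        = obj.owner);
```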
Thanks.
Categories: DBA Blogs
Not able to export AUDSYS.AUD$UNIFIED:SYS_P23021 -- the same table/partition was exported without issues yesterday by a job using full=yes
Our daily expdp job uses full=yes and worked OK until last night.
Last night the expdp log showed:
ORA-31693: Table data object
"AUDSYS"."AUD$UNIFIED":"SYS_P23021" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-08103: object no longer exists
An attempt to manually export the same object, using the same user created to export the full database, fails with:
expdp $pcsbackup cluster=NO parallel=1 job_name=expdp_tbl DUMPFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs_%U.dmp LOGFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs.log directory=dir_dbexp exclude=statistics encryption=data_only ENCRYPTION_PASSWORD=***** ENCRYPTION_MODE=DUAL COMPRESSION=DATA_ONLY full=no tables=AUDSYS.AUD\$UNIFIED\:SYS_P23021
Export: Release 19.0.0.0.0 - Production on Wed Jun 19 09:03:27 2024
Version 19.20.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Starting "PCSBACKUP"."EXPDP_TBL": /********@gecdwp_pcsbackup cluster=NO parallel=1 job_name=expdp_tbl DUMPFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs_%U.dmp LOGFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs.log directory=dir_dbexp exclude=statistics encryption=data_only ENCRYPTION_PASSWORD=******** ENCRYPTION_MODE=DUAL COMPRESSION=DATA_ONLY full=no tables=AUDSYS.AUD$UNIFIED:SYS_P23021
ORA-39166: Object AUDSYS.AUD$UNIFIED was not found or could not be exported or imported.
ORA-31655: no data or metadata objects selected for job
Job "PCSBACKUP"."EXPDP_TBL" completed with 2 error(s) at Wed Jun 19 09:04:08 2024 elapsed 0 00:00:39
I tried different combinations for the table name:
AUDSYS.AUD\$UNIFIED:SYS_P23021 -- and the above -- same error
expdp $pcsbackup cluster=NO parallel=1 job_name=expdp_tbl DUMPFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs_%U.dmp LOGFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs.log directory=dir_dbexp exclude=statistics encryption=data_only ENCRYPTION_PASSWORD=***** ENCRYPTION_MODE=DUAL COMPRESSION=DATA_ONLY full=no tables=AUDSYS.AUD$UNIFIED:SYS_P23021
Export: Release 19.0.0.0.0 - Production on Wed Jun 19 09:00:37 2024
Version 19.20.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Starting "PCSBACKUP"."EXPDP_TBL": /********@gecdwp_pcsbackup cluster=NO parallel=1 job_name=expdp_tbl DUMPFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs_%U.dmp LOGFILE=expdp_gecdwp_full_2024_0618_222801_failedobjs.log directory=dir_dbexp exclude=statistics encryption=data_only ENCRYPTION_PASSWORD=******** ENCRYPTION_MODE=DUAL COMPRESSION=DATA_ONLY full=no tables=AUDSYS.AUD:SYS_P23021
ORA-39166: Object AUDSYS.AUD was not found or could not be exported or imported.
ORA-31655: no data or metadata objects selected for job
Job "PCSBACKUP"."EXPDP_TBL" completed with 2 error(s) ...
Categories: DBA Blogs
Export backup fails with ORA-39127: unexpected error from call to TAG: SCHEDULER Calling: SYS.DBMS_SCHED_CLASS_EXPORT.GRANT_EXP obj:
We see this error in the daily full=y backup:
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
ORA-39127: unexpected error from call to TAG: SCHEDULER Calling: SYS.DBMS_SCHED_CLASS_EXPORT.GRANT_EXP obj: SYS.IDX_RB$CLASS_54724 - SCHEDULER CLASS
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SCHED_MAIN_EXPORT", line 2601
ORA-06512: at "SYS.DBMS_SCHED_CLASS_EXPORT", line 41
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_SCHED_MAIN_EXPORT", line 2601
ORA-06512: at "SYS.DBMS_SCHED_CLASS_EXPORT", line 41
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 11144
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
But when we checked (as SYSDBA), this object does not exist in the container where the export runs:
SYS.IDX_RB$CLASS_54724
No changes were made to the database or to the shell script that runs the Data Pump export backup.
We had seen the same issue on 10 June 2024, but without any changes the following days' export backups ran fine.
Yesterday the same error appeared again.
Not sure what could be causing it. Here is the export command that we use for the daily backup
(the $vars are locations/filenames generated in the script):
expdp $pcsbackup job_name=${jobname}\
directory=DIR_DBEXP\
dumpfile=${dmpfile_name}\
logfile=${logfile_name}\
full=y\
cluster=no \
parallel=4\
exclude=STATISTICS\
exclude=SCHEMA:\" IN \(SELECT USERNAME FROM DBATOOL.EXPDP_EXCLUDE_DWP_BACKUP_TABLES\)\"\
ENCRYPTION=DATA_ONLY \
ENCRYPTION_PASSWORD=<OurProdEncPassword> \
ENCRYPTION_MODE=DUAL \
COMPRESSION=DATA_ONLY
Categories: DBA Blogs
APEX AI Assistant: does it come out of the box, or do we need to pay an AI provider like ChatGPT?
Hi There,
Regarding all the recent news about the APEX AI Assistant for code generation: does it come out of the box, or do we need to configure and pay an AI/LLM provider such as ChatGPT?
I am referring to your recent video Build AI-Powered Enterprise Apps Faster with Oracle APEX (https://www.youtube.com/watch?v=qZD8wtn7qoI )
Regards,
Dr. Gyana
Categories: DBA Blogs