Feed aggregator

Partitioning -- 13a : Relocating a Partition

Hemant K Chitale - Sun, 2018-12-16 05:19
When you want or need to move a Partition to a different Tablespace (e.g. as part of a LifeCycle Management Policy), you may need downtime to relocate the Partition.  However, 12c Release 1 allows Online Relocation of a Partition.

Let's say I have a SALES_DATA table and I need to move the Year 2016 data to a tablespace with datafiles on "cheaper" (less-performant) storage :

SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2016 TBS_YEAR_2016
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

6 rows selected.

SQL>
SQL> select index_name, tablespace_name, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 /

INDEX_NAME TABLESPACE_NAME STATUS
------------------------------ ------------------------------ --------
SYS_C0017514 HEMANT VALID
SALES_DATA_LCL_NDX_1 N/A
SALES_DATA_LCL_NDX_2 N/A

SQL>
SQL> l
1 select index_name, partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7* order by index_name, partition_position
SQL> /

INDEX_NAME PARTITION_NA TABLESPACE_NAM STATUS
------------------------------ ------------ -------------- --------
SALES_DATA_LCL_NDX_1 P_2016 TBS_YEAR_2016 USABLE
SALES_DATA_LCL_NDX_1 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_1 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_1 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_1 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_1 P_MAXVALUE USERS USABLE
SALES_DATA_LCL_NDX_2 P_2016 TBS_YEAR_2016 USABLE
SALES_DATA_LCL_NDX_2 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_2 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_2 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_2 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_2 P_MAXVALUE USERS USABLE

12 rows selected.

SQL>


I then want to move the Year 2016 data to the Tablespace ARCHIVE_SALES_DATA :

SQL> alter table SALES_DATA
2 move partition P_2016
3 tablespace ARCHIVE_SALES_DATA
4 /

Table altered.

SQL>
SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NA TABLESPACE_NAME
------------ ------------------
P_2016 ARCHIVE_SALES_DATA
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

6 rows selected.

SQL>
SQL> select index_name, tablespace_name, partitioned, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by index_name
5 /

INDEX_NAME TABLESPACE_NAME PAR STATUS
------------------------------ ------------------ --- --------
SALES_DATA_LCL_NDX_1 YES N/A
SALES_DATA_LCL_NDX_2 YES N/A
SYS_C0017514 HEMANT NO UNUSABLE

SQL> alter index SYS_C0017514 rebuild ;

Index altered.

SQL> select index_name, tablespace_name, partitioned, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by index_name
5 /

INDEX_NAME TABLESPACE_NAME PAR STATUS
------------------------------ ------------------ --- --------
SALES_DATA_LCL_NDX_1 YES N/A
SALES_DATA_LCL_NDX_2 YES N/A
SYS_C0017514 HEMANT NO VALID

SQL>
SQL> l
1 select index_name, partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7* order by index_name, partition_position
SQL> /

INDEX_NAME PARTITION_NA TABLESPACE_NAME STATUS
------------------------------ ------------ ------------------ --------
SALES_DATA_LCL_NDX_1 P_2016 TBS_YEAR_2016 UNUSABLE
SALES_DATA_LCL_NDX_1 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_1 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_1 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_1 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_1 P_MAXVALUE USERS USABLE
SALES_DATA_LCL_NDX_2 P_2016 TBS_YEAR_2016 UNUSABLE
SALES_DATA_LCL_NDX_2 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_2 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_2 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_2 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_2 P_MAXVALUE USERS USABLE

12 rows selected.

SQL>
SQL> alter index SALES_DATA_LCL_NDX_1
2 rebuild partition P_2016
3 tablespace ARCHIVE_SALES_DATA
4 /

Index altered.

SQL>
SQL> alter index SALES_DATA_LCL_NDX_2
2 rebuild partition P_2016
3 tablespace ARCHIVE_SALES_DATA
4 /

Index altered.

SQL>
SQL> l
1 select index_name, partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7* order by index_name, partition_position
SQL> /

INDEX_NAME PARTITION_NA TABLESPACE_NAME STATUS
------------------------------ ------------ ------------------ --------
SALES_DATA_LCL_NDX_1 P_2016 ARCHIVE_SALES_DATA USABLE
SALES_DATA_LCL_NDX_1 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_1 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_1 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_1 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_1 P_MAXVALUE USERS USABLE
SALES_DATA_LCL_NDX_2 P_2016 ARCHIVE_SALES_DATA USABLE
SALES_DATA_LCL_NDX_2 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_2 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_2 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_2 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_2 P_MAXVALUE USERS USABLE

12 rows selected.

SQL>


When I relocated the P_2016 Partition (to the ARCHIVE_SALES_DATA Tablespace), the ROWIDs for rows in that Partition changed.  So the Non-Partitioned Index SYS_C0017514 and the corresponding Local Partitions of the two Partitioned Indexes became "UNUSABLE".  These had to be rebuilt. Alternatively, I could have added the UPDATE INDEXES clause to the ALTER TABLE ... MOVE PARTITION ... statement to reset the Indexes to Usable, but this would not have relocated the Local Partitions for those two Indexes to the new Tablespace.
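
For illustration, that alternative would have been a single statement along these lines (a sketch, not what I actually ran here) :

alter table SALES_DATA
move partition P_2016
tablespace ARCHIVE_SALES_DATA
update indexes
/

This resets SYS_C0017514 and the Local Index Partitions to Usable in one step, but the Local Index Partitions for P_2016 would have remained in TBS_YEAR_2016.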

Note that for Table Partitions the MOVE clause relocates the Partition, but for Index Partitions the REBUILD clause is used to relocate (as well as make Usable) the Partition.

I would encourage you to view the documentation and examples of the MOVE ONLINE facility in 12c to relocate a Table Partition without downtime.
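
For example, a sketch of the Online syntax (not run here) :

alter table SALES_DATA
move partition P_2016
tablespace ARCHIVE_SALES_DATA
update indexes
online
/

This allows DML against the Partition to continue while it is being relocated, with the Indexes being maintained.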


Categories: DBA Blogs

The job where I learned the most valuable lessons was...

Joel Kallman - Sat, 2018-12-15 14:07
...McDonald's!  Prior to Oracle, I worked for a number of different companies: in IT at a bank and in product development at a couple of other large companies.  But the most valuable lessons I ever learned and what has helped me the most in my entire career has been the education I received in my late teens at McDonald's.

Oftentimes, when people reflect on who shaped them the most in life, it's usually a coach or a high school teacher or a university professor.  In my case, it was Louis Stallman, Jr. - the general manager of a McDonald's franchise in Wadsworth, Ohio, where I worked in high school and in the summer breaks between my years at university.

I'm second-generation American, the product of Polish and German immigrants.  I'm the youngest of 7 children (my Mom was also the 7th child in her family).  My parents grew up during the Depression and passed their life lessons on to us - waste nothing, speak when you're spoken to, and always work & do your best.  My Dad was in the US Army Air Corps and flew on B-29s during World War II.  He was the disciplinarian.  At the time, my friends thought that I grew up in an overly strict household, but I can say today with certainty that this disciplined household provided complete structure.  There was a lucid difference between doing something right and doing something wrong.

When I started at McDonald's in high school, I wasn't sure what to expect.  It probably was a bit more rigid than what most kids were used to, but having grown up in a disciplined family, it was pretty easy to adapt to.  I certainly wasn't perfect - I was the typical crass teenager who thought he knew everything.  But it was a very structured environment and there was always a right and a wrong way to do something.

The general manager of this McDonald's was Louis Stallman, Jr.  While he was a kind man, he was also a bit of a disciplinarian.  There was no slouching about.  This is where I learned the infamous phrase "if you have time to lean, you have time to clean".  I made a whopping $3.35 an hour and we were put to task and earned every cent of it.  I worked a number of years there, in the summers, over holidays, in some cases even on holidays.  I had the good fortune to interact with a lot of people, some nice, some not so nice.  Lou Stallman coached everyone who interacted with a customer, and when you would fail or omit something, he was always there to remind you.  What I learned from Lou Stallman and what I've carried forward into my professional life:

  1. Greet the customer with a smile, always.
  2. When talking to a customer, look at them directly in the eye.
  3. Talk clearly and repeat back to the customer what they told you.
  4. Treat the customer (and really everyone) with respect and dignity.
  5. Genuinely thank the customer and wish them farewell.

That's it.  Seems obvious, right?  It is, but I see countless people forego these very basic courtesies when interacting with any customer.

I'll greet anyone at any time.  It's the kind thing to do.  Where I live in the USA (Ohio), it's generally considered rude not to greet someone, to put on a smile and say "good morning" while waiting for an elevator or say "hello" while passing them on the street.  I always enjoy going to Oracle HQ in California and greet everyone I pass on the sidewalk - they look at me like I'm from Mars.

In a disconnected, smartphone-crazy world, I find people are a bit afraid to engage someone and look them in the eye while talking to them or listening to them.  There couldn't be a better way to say "you have my full attention" than looking at them directly, without distractions.

When listening to a customer, pay careful attention to what they're asking, and repeat back to them what you think they said.  Maybe they're complaining and need to vent some steam.  Maybe what they really want isn't accurately expressed in the words they're using, so repeat back to them what you think they want.  That's the perfect time to ensure you have a mutual understanding.  And people like to know that someone has truly listened to them.

The customer (and really everyone) should always, always, always be treated with respect and dignity.  At all times.  The customer may not always be right, but they are deserving of respect and dignity.

Always thank the customer.  Always.  You may have killed yourself for your customer and you might think that they owe you something.  But still thank them.  They're the reason why you're able to feed your family.  Competition is everywhere, and an easy way to stand out is through a personal connection.  Something is wrong if you can't extend a simple courtesy to your customer, thanking them for their business.

There you have it - the simple lessons I learned from McDonald's general manager Louis Stallman, Jr.  They're not technical and they're not difficult to grasp.  But they have broad applicability to almost any job in any field anywhere in the world.  They have served me well and I encourage you to try them yourself.

Off Canvas Menu in Oracle VBCS/JET Cloud

Andrejus Baranovski - Sat, 2018-12-15 13:59
These days I'm actively working with the VBCS/JET Cloud product from Oracle. The more I work with VBCS, the more I like it. VBCS follows declarative development concepts similar to Oracle ADF, which makes it easy to get up to speed with VBCS development. VBCS, with its declarative JavaScript development approach, brings a unique solution for implementing JavaScript systems in the enterprise.

I will share a sample with an off-canvas menu implementation for a VBCS app. The sample is based on a step-by-step guide shared by Shay Shmeltzer. I don't describe the steps to build the off-canvas menu in VBCS from scratch; you should watch Shay's video for the instructions.

Off canvas menu rendered in VBCS app:


You should check my previous post - Flow Navigation Menu Control in Oracle VBCS - to see how to build multiple flows in a VBCS app. I have defined three flows in my sample, which means there will be three menu items:


To render the menu in the off-canvas block, I'm using the JET navigation list component:


Sample app code, which can be imported into your VBCS instance, is available on GitHub.

[FREE Masterclass] How To Become Oracle Certified Cloud Architect [1Z0-932] in 8 Weeks

Online Apps DBA - Sat, 2018-12-15 04:16

Oracle Gen2 Cloud (OCI) is a must-know for everyone [Apps DBAs, DBA Architects, SOA Admins, Architects, FMW Admins & Security Professionals] because all Oracle software runs on top of Oracle Cloud Infrastructure (OCI), i.e. Oracle’s Gen2 Cloud! Learning & being certified on Cloud will take your career to the next level in 2019. Visit: HERE to Join […]

The post [FREE Masterclass] How To Become Oracle Certified Cloud Architect [1Z0-932] in 8 Weeks appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Extreme Nulls

Jonathan Lewis - Fri, 2018-12-14 13:01

This note is a variant of a note that I wrote a few months ago about the impact of nulls on column groups. The effect showed up recently on a client site with a little camouflage that confused the issue for a little while, so I thought it would be worth a repeat.  We’ll start with a script to generate some test data:

rem
rem     Script:         pt_hash_cbo_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:        
rem
rem     Last tested 
rem             12.1.0.2
rem

create table t1 (
        hash_col,
        rare_col,
        n1,
        padding
)
nologging
partition by hash (hash_col)
partitions 32
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        mod(rownum,128),
        case when mod(rownum,1021) = 0 
                then rownum + trunc(dbms_random.value(-256, 256))
        end,
        rownum,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1048576 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(hash_col, rare_col) nologging
local compress 1
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                granularity => 'ALL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve got a hash-partitioned table with 32 partitions; the partitioning key is called hash_col, and there is another column called rare_col that is almost always null – roughly 1 row in every 1,000 holds a value. I’ve added a local index on (hash_col, rare_col), compressing the leading column since hash_col is very repetitive, and gathered stats on the partitions and table. Here’s a view of the data for a single value of hash_col, and a summary report of the whole data set:

select  
        hash_col, rare_col, count(*)
from
        t1
where
        hash_col = 63
group by
        hash_col, rare_col
order by
        hash_col, rare_col
;

  HASH_COL   RARE_COL   COUNT(*)
---------- ---------- ----------
        63     109217          1
        63     240051          1
        63     370542          1
        63     501488          1
        63     631861          1
        63     762876          1
        63     893249          1
        63    1023869          1
        63                  8184

9 rows selected.

select
        count(*), ct
from    (
        select
                hash_col, rare_col, count(*) ct
        from
                t1
        group by
                hash_col, rare_col
        order by
                hash_col, rare_col
        )
group by ct
order by count(*)
;

  COUNT(*)         CT
---------- ----------
         3       8183
       125       8184
      1027          1

Given the way I’ve generated the data, any one value for hash_col will have 8,184 (or 8,183) rows where rare_col is null; but there are 1,027 rows which have a value for both hash_col and rare_col, with just one row for each combination.

Now we get to the problem. Whenever rare_col is non null the combination of hash_col and rare_col is unique (though this wasn’t quite the case at the client site) so when we query for a given hash_col and rare_col we would hope that the optimizer would be able to estimate a cardinality of one row; but this is what we see:


variable n1 number
variable n2 number

explain plan for
select /*+ index(t1) */
        n1
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

select * from table(dbms_xplan.display);

========================================

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |   908 | 10896 |    76   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |   908 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

The optimizer has predicted a massive 908 rows. A quick check of the object stats shows us that this is “number of rows in table” / “number of distinct keys in index” (1,048,576 / 1,155, rounded up).
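
You can check the arithmetic against the data dictionary (a quick sanity-check query, using the object names from the script above):

select
        t.num_rows,
        i.distinct_keys,
        ceil(t.num_rows / i.distinct_keys)      predicted_rows
from
        user_tables     t,
        user_indexes    i
where
        t.table_name = 'T1'
and     i.index_name = 'T1_I1'
;

With this data set it should report 1,048,576 rows, 1,155 distinct keys, and a predicted 908 rows.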

Any row with rare_col set to null cannot match the predicate “rare_col = :n2”, but because the optimizer is looking at the statistics of complete index entries (and there are 1048576 of them, with 1155 distinct combinations, and none that are completely null) it has lost sight of the frequency of nulls for rare_col on its own. (The same problem appears with column groups – which is what I commented on in my previous post on this topic).
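
A quick way to see the scale of the problem is to count the nulls directly (count(rare_col) ignores nulls, so the difference gives the null count):

select
        count(*)                        total_rows,
        count(rare_col)                 non_null_rows,
        count(*) - count(rare_col)      null_rows
from
        t1
;

That’s 1,047,549 nulls out of 1,048,576 rows – and only the 1,027 non-null rows can ever match the equality predicate.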

I’ve often said in the past that you shouldn’t create histograms on data unless your code is going to use them. In this case I need to stop the optimizer from looking at the index.distinct_keys and one way to do that is to create a histogram on one of the columns that defines the index; and I’ve chosen to do this with a fairly arbitrary size of 10 buckets:


execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns rare_col size 10')

explain plan for
select /*+ index(t1) */
        n1
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

select * from table(dbms_xplan.display);

========================================

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |     1 |    12 |     2   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |     1 |    12 |     2   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |     1 |    12 |     2   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |     1 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

Bonus observation

This problem came to my attention (and I’ve used a partitioned table in my demonstration) because I had noticed an obvious optimizer error in the client’s execution plan for exactly this simple a query. I can demonstrate the effect the client saw by running the test again without creating the histogram but declaring hash_col to be not null. Immediately after creating the index I’m going to add the line:


alter table t1 modify hash_col not null;

(The client’s system didn’t declare the column not null, but their equivalent of hash_col was part of the primary key of the table which meant it was implicitly declared not null). Here’s what my execution plan looked like with this constraint in place:


--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |   908 | 10896 |    76   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |    28 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

Spot the difference.

The estimate of index rowids is far smaller than the estimate of the rows that will be fetched using those rowids. This is clearly an error.

If you’re wondering how Oracle got this number divide 908 by 32 (the number of partitions in the table) – the answer is 28.375.

Fortunately it’s (probably) an error that doesn’t matter despite looking worryingly wrong. Critically the division hasn’t changed the estimate of the number of table rows (we’ll ignore the fact that that estimate is wrong thanks to a different error), and the cost of the index range scan and table access have not changed. The error is purely cosmetic in effect.

Interestingly if you modify the query to be index-only (i.e. you restrict the select list to columns in the index) this extra division disappears.
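
For example (a sketch of the index-only variant):

explain plan for
select  /*+ index(t1) */
        hash_col, rare_col
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

With the select list restricted to columns in the index there is no table access operation, and the range scan estimate no longer shows the scaled-down figure.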

Summary

1) If you have a B-tree index where one (or more) of the columns is null for a large fraction of the entries then the optimizer may over-estimate the cardinality of a predicate of the form: “(list of all index columns) = (list of values)” as it will be using the index.distinct_keys in its calculations and ignore the effects of nulls in the individual columns. If you need to work around this issue then creating a histogram on one of the index columns will be sufficient to switch Oracle back to the strategy of multiplying the individual column selectivities.
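
As a rough sketch of the arithmetic that the multiplication strategy produces for this data set (standard selectivity formulae, ignoring rounding subtleties):

sel(hash_col) = 1/128
sel(rare_col) = ((num_rows - num_nulls) / num_rows) / num_distinct
              = (1,027 / 1,048,576) / 1,027  =  1 / 1,048,576
cardinality   = 1,048,576 * (1/128) * (1/1,048,576)  =  1/128, rounded up to 1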

2) There are cases of plans for accessing partitioned tables where Oracle starts by using table-level statistics to get a suitable set of estimates but then displays a plan with the estimate of rows for an index range scan scaled down by the number of partitions in the table. This results in a visible inconsistency between the index estimate and the table estimate, but it doesn’t affect the cardinality estimate for the table access or either of the associated costs – so it probably doesn’t have a destabilising effect on the plan.


Lightning Web Components - the dawn of (another) new era

Rob Baillie - Fri, 2018-12-14 08:04

Salesforce have a new technology. Lightning Components look like they’re on the way out, and are being replaced with a new technology ‘Lightning Web Components’.

The reasons behind that, and the main principles behind its design are covered very nicely in this article on developer.salesforce.com.

From that we can then get to a series of examples here.

(Note: some of the code snippets used below, to illustrate points, are taken from the recipes linked above)

Now I’m a big supporter of evolution, and I love to see new tools being given to developers on the Salesforce platform, so, with a couple of hours to play with it - what’s the immediate impression?

This is an article on early impressions, based on reviewing and playing with the examples - I fully expect there to be misunderstandings, bad terminology, and mistakes in here - If you're OK with that, I'm OK with that. I admit, I got excited and wanted to post something as quickly as possible before my cynical side took over. So here it is - mistakes and all.

WOW. Salesforce UI development has grown up.

Salesforce aren’t lying when they’ve said that they’re trying to bring the development toolset up to the modern standards.

We get imports, what look like annotations and decorators, and there’s even mention of Promises. Maybe there’s some legs in this…

It’s easy to dismiss this as ‘Oh no, yet another change’, but the thing is - the rest of industry develops and improves its toolset - why shouldn’t Salesforce?

The only way to keep the product on point IS to develop the frameworks, replace the technology, upgrade, move on. If you don’t do that then the whole Salesforce Ecosystem starts to stagnate.

Or to put it another way - in every other part of the developer community, learning from what was built yesterday and evolving is seen as a necessity. It’s good to see Salesforce trying to keep up.

So what are the big things that I’ve spotted immediately?

import is supported, and that makes things clearer

Import is a massive addition to Javascript that natively allows us to define the relationships between javascript files within javascript, rather than at the HTML level.

Essentially, this replaces the use of most ‘script’ tags in traditional Javascript development.

For Lightning Web Components, we use this to bring in capabilities from the framework, as well as static resources.

E.g. Importing modules from the Lightning Web Components framework:


import { LightningElement, track } from 'lwc';

Importing from Static Resources:


import { loadScript } from 'lightning/platformResourceLoader';
import chartjs from '@salesforce/resourceUrl/chart';

What this has allowed Salesforce to do is to split up the framework into smaller components. If you don’t need to access Apex from your web component, then you don’t need to import the part of the framework that enables that capability.

This *should* make individual components much more lightweight and targeted - only including the capabilities that are required, when they are required.

Getting data on screen is simpler

Any javascript property is visible to the HTML template.

E.g.


export default class WebAppComponentByMe extends LightningElement {
    contacts;

We can then render this property in the HTML with {contacts} (none of those attributes to define and none of those pesky v dot things to forget).

Much neater, much more concise.

We track properties

Looking at the examples, my assumption was that if we want to perform actions when a property is changed, we mark the property trackable using the @track decorator.

For example:


export default class WebAppComponentByMe extends LightningElement {
    @track contacts;

I was thinking that, at this point, anything that references this property (on page, or in Javascript) will be notified whenever that property changes.

However, at this point I can't really tell what the difference is between tracked and non-tracked properties - a mystery for another day

Wiring up to Apex is much simpler

One of the big criticisms of Lightning Components that I always had was the amount of code you need to write in order to call an Apex method. OK, so you have force:recordData for a lot of situations, but there are many times when only an Apex method will do.

In Web Components, this is much simpler.

In order to connect to Apex, we import the ‘wire’ module, and then import functions into our javascript


import { LightningElement, wire } from 'lwc';
import getContactList from '@salesforce/apex/ContactController.getContactList';

The first line imports the wire capabilities from the framework, the second then imports the Apex method as a javascript method, therefore making it available to the component.

We can then connect a javascript property up to the method using the wire decorator:


@wire(getContactList) contacts;

Or wire up a javascript method:


@wire(getContactList)
wiredContacts({ error, data }) {
    if (data) {
        this.contacts = data;
    } else if (error) {
        this.error = error;
    }
}

When the component is initialised, the getContactList method will be executed.

If the method has parameters, that’s also very simple (E.g. wiring to a property):


@wire(getContactList, { searchKey: '$searchKey' })
contacts;

Changing the value of a property causes Apex to re-execute

Having wired up a property as a parameter to an Apex bound Javascript function, any changes to that property will cause the function to be re-executed

For example, if we:


searchKey = '';

@wire(findContacts, { searchKey: '$searchKey' })
contacts;

Whenever the searchKey property changes, the Apex method imported as ‘findContacts’ will be executed and the contacts property is updated.

Thankfully, we can control when that property changes, as it looks like changing the property in the UI does not automatically fire a change the property on the Javascript object. In order to do that, we need to change the property directly.

E.g. Let’s say we extend the previous example and there’s an input that is bound to the property, with an onchange event defined - something along the lines of:


<lightning-input type="search" value={searchKey} onchange={handleKeyChange}></lightning-input>

And the handler does the following:


handleKeyChange(event) {
    this.searchKey = event.target.value;
}

This will cause the findContacts method to fire whenever the value in the input is changed.

Note that it is the assignment to this.searchKey that causes the event to fire - it looks like the binding from the HTML is 1-way. I admit that I need to investigate this further.

Events do not require configuration to be implemented

Events work in a completely different way - but then that’s not a problem - Application and Component events were different enough to cause headaches previously. The model is actually much simpler.

The example in the above referenced repository to look at is ‘PubSub’.

It’s much too involved to go into detail here, but the result is that you need to:

  • Implement a Component that acts as the messenger (implementing registerListener, unregisterListener and fireEvent)
  • Any component that wants to fire an event, or listen for an event will import that component to do so, firing events or registering listeners.

This would seem to imply that (at least a certain amount of) state within components is shared - looking like those defined with 'const'

Whatever the precise nature of the implementation, a pure Javascript solution is surely one that anyone involved in OO development will welcome.

I suspect that, in a later release, this will become a standard component.

Summary

Some people will be thinking "Man, glad I didn’t migrate from Classic / Visualforce to Lightning Experience / Components - maybe I’ll just wait a little longer for it all to settle down".

You’re wrong - it won’t settle, it’ll continue to evolve and the technologies will be continually replaced by new ones. Eventually, the jump from what you have to where you need to get to will be so huge that you’ll find it incredibly hard. There’s a reason why Salesforce pushes out 3 releases a year, whether you want it or not, these technology jumps are just the same. The more you put it off, the more painful it’ll be.

The change from Lightning Components to Lightning Web Components is vast - a lot more than a single 3 letter word would have you suspect. The only real similarities between the two frameworks that I’ve seen up to now are:

  • Curlies are used to bind things
  • The Base Lightning Components are the same
  • You need to know Javascript

Other than that, they’re a world apart.

Also, I couldn’t find any real documentation - only examples - although those examples are a pretty comprehensive starting point.

Now, obviously it's early days - we're in pre-release right now, but what I've seen gives me great hope for the framework, it's a significant step forward and I can't wait to see what happens next. I wonder if a Unit Testing framework might follow (I can but hope)

You could wait, but hey, really, what are you waiting for? Come on, jump in. The change is exciting...

Oracle Database 18c XE available for Everyone

Oracle Database Express Edition (XE) Release 18.4.0.0.0 (18c) was released on October 19, 2018 and is now available for DBAs, developers, data scientists or anyone curious about the world's #1...

We share our skills to maximize your revenue!
Categories: DBA Blogs

[BLOG] Templates in Oracle SOA Suite 12c

Online Apps DBA - Fri, 2018-12-14 06:35

Are you learning SOA Dev and want to enhance your skills on the same? If yes, then visit: https://k21academy.com/soadev23 and learn about: ✔What are Templates in SOA 12C ✔The 3 levels of templates ✔Steps to create different levels of templates & much more… […]

The post [BLOG] Templates in Oracle SOA Suite 12c appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Solved] Oracle GoldenGate Extract/Replicat Abended- OGG-02091: Operation not supported

Online Apps DBA - Fri, 2018-12-14 03:02

Are you learning Oracle GoldenGate and are facing issues such as ‘Extract or Replicat processes getting abended’? If you are facing the above issue and want to know all about it, then visit https://k21academy.com/goldengate32 where we have covered: ✔’Extract or Replicat processes getting abended’ issue in detail ✔The Reason why it occurs ✔The Solution to […]

The post [Solved] Oracle GoldenGate Extract/Replicat Abended- OGG-02091: Operation not supported appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Packages are invalid -- ORA-04061

Tom Kyte - Fri, 2018-12-14 02:46
Hi, In our DB, few packages became invalid. And when we verified it we saw that there are no errors related to them. I was expecting sessions to run this package without any error, however when it was executed for first time, we got ORA-04061 erro...
Categories: DBA Blogs

Full-Text Index (Table, Index and LOB files) Size Creep

Tom Kyte - Fri, 2018-12-14 02:46
Q: How do we "manage" the size of a table's associated SYS_LOB files? Background: I have a table [SRCH_CACHE] that was setup as a lookup table, because the two base tables could not be joined/filtered with sufficient speed. The table is pretty ...
Categories: DBA Blogs

How to forecast archive generation for a 350 GB table with no large objects.

Tom Kyte - Fri, 2018-12-14 02:46
Hi Gurus, Thanks for helping us whenever we need you. I have a task to import a 350 GB table. I will create index later. It's a 12.2 CDB with one single PDB and have a standby DB with force_logging enabled. Is there a method so that I can f...
Categories: DBA Blogs

How to compare date duration of first row with second and so on.. and convert it to rows to column with indicators

Tom Kyte - Fri, 2018-12-14 02:46
Hi Tom, I have below table: CREATE TABLE PRDCT (ID NUMBER(10,0), PRDCT_CD VARCHAR2(1), START_DT DATE, END_DT DATE); With data in this like - Insert Into PRDCT Values (1,'A',to_date('01-May-2017'),to_date('31-Jul-2017')...
Categories: DBA Blogs

Red Hat Enterprise Linux 8 – Application Streams

Yann Neuhaus - Thu, 2018-12-13 09:05

You may have heard that Red Hat Enterprise Linux 8 has been available for download as a beta version for a few weeks now…
You want to download it ? Click here.
A significant change coming with this new version is the way the application packages are provided. As you know, up to RHEL7 packages were downloaded via repositories listed in .repo files located by default under /etc/yum.repos.d/. This is still the same with RHEL8, but two new major repositories are available in the default redhat.repo file.

In order to get access to them, we must of course register the system to a Red Hat Subscription…
[root@rhel8beta1 ~]# subscription-manager register --username xxx.yyy@zzz.com
Registering to: subscription.rhsm.redhat.com:443/subscription
Password:
The system has been registered with ID: e42829a5-8a8e-42d3-a69a-07a1499e9b0e
The registered system name is: rhel8beta1
[root@rhel8beta1 ~]#

…and attach it to a Pool (here the Pool will be chosen automatically) :
[root@rhel8beta1 ~]# subscription-manager attach --auto
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64 Beta
Status: Subscribed
[root@rhel8beta1 ~]#

As the /etc/yum.repos.d/redhat.repo is now available, let’s check which repositories it contains :
[root@rhel8beta1 ~]# grep -B1 name /etc/yum.repos.d/redhat.repo
[rhel-8-for-x86_64-rt-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Debug RPMs)
--
[rhel-8-for-x86_64-rt-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-source-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-debug-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Debug RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-source-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-rpms] name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
--
[rhel-8-for-x86_64-rt-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (RPMs)
--
[rhel-8-for-x86_64-appstream-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-debug-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (Debug RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
--
[rhel-8-for-x86_64-nfv-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-rpms] name = Fast Datapath Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-debug-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-source-rpms] name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Source RPMs)
[root@rhel8beta1 ~]#

AppStream ? BaseOS ? What’s that… This is what we will discover in this blog.

First of all we can see that both are enabled by default :
[root@rhel8beta1 ~]# subscription-manager repos --list-enabled
+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-8-for-x86_64-baseos-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/baseos/os
Enabled: 1


Repo ID: rhel-8-for-x86_64-appstream-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/appstream/os
Enabled: 1
[root@rhel8beta1 ~]#

BaseOS

“Content in BaseOS is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations”.
This is how Red Hat defines it. Over a thousand packages are available from the BaseOS repository :
[root@rhel8beta1 ~]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-baseos-beta-rpms" list available | wc -l
1145
[root@rhel8beta1 ~]#

You can get the full list here.
Basically, those packages are system-related and are mainly used to manage and configure the OS and services (such as NetworkManager, Chrony, Dracut, and so on…). In other words, most of them are intended for use by system administrators. So nothing very new here, except for the fact that they are all grouped in a single dedicated repository.

AppStream

The second repository contains much more packages (full list here) :
[root@rhel8beta1 /]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-appstream-beta-rpms" list available | wc -l
4318
[root@rhel8beta1 /]#

Application Stream provides additional user-space applications, runtime languages and databases. It replaces the “Extras” repositories and the Software Collections. All the content in AppStream is available in two formats : the well-known RPM format and a brand new one called “module”, which is an extension to the RPM format.
A module is a set of RPM packages that are linked together. For example, if you want to check which packages are concerned by the postgresql module, you must use the new “yum module” command :
[root@rhel8beta1 /]# yum module list postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:47:42 ago on Mon Nov 26 15:13:03 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
postgresql 10 [d] client, default [d] postgresql module
postgresql 9.6 client, default [d] postgresql module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

By the way, YUM is no longer the default package manager with RHEL8. The command is still available, but it’s actually an alias for the DNF tool (coming from Fedora) :
[root@rhel8beta1 ~]# which yum
/usr/bin/yum
[root@rhel8beta1 ~]# ll /usr/bin/yum
lrwxrwxrwx. 1 root root 5 Oct 15 10:25 /usr/bin/yum -> dnf-3
[root@rhel8beta1 ~]#

If you want to have a look at the main usage differences between YUM and DNF, check this :
[root@rhel8beta1 /]# man yum2dnf
YUM2DNF(8)


NAME
yum2dnf - Changes in DNF compared to YUM
[...] [...]

Let’s go back to our modules. Here is how you can check which packages a module contains :
[root@rhel8beta1 /]# yum module info postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Name : postgresql
Stream : 10 [d] Version : 20180813131250
Context : 9edba152
Profiles : client, default [d] Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-contrib-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-docs-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plperl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plpython3-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-pltcl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-static-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-rpm-macros-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64


Name : postgresql
Stream : 9.6
Version : 20180813131400
Context : 9edba152
Profiles : client, default [d] Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-contrib-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-docs-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plperl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plpython3-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-pltcl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-devel-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-static-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-rpm-macros-0:9.6.10-1.el8+1547+210b7007.x86_64


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

The above output shows that a module can contain several streams. Each stream represents a different version of the application.
Moreover, a module can have a couple of profiles. A profile is a set of RPM packages selected to be installed together for a particular use-case (server, client, development, minimal install, and so on…).

To install an application from the default stream and with the default profile, add the ‘@’ character before the application name :
[root@rhel8beta1 /]# yum install @postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:25:55 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
Installing module profiles:
postgresql/default
Enabling module streams:
postgresql 10


Transaction Summary
=======================================================================================================================================================================================
Install 3 Packages


Total download size: 6.7 M
Installed size: 27 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

You can also use the “yum module install postgresql” command.

Quick check :
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

And if you want to install Postgres from an older stream and with another profile (here the Postgres 9.6 client only) :
[root@rhel8beta1 /]# yum install @postgresql:9.6/client
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:33:45 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql x86_64 9.6.10-1.el8+1547+210b7007 rhel-8-for-x86_64-appstream-beta-rpms 1.4 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
Installing module profiles:
postgresql/client
Enabling module streams:
postgresql 9.6


Transaction Summary
=======================================================================================================================================================================================
Install 2 Packages


Total download size: 1.6 M
Installed size: 5.8 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

Check :
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#


[root@rhel8beta1 ~]# yum module list --enabled
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:04:43 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 9.6 [e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

Hmmm… only the 9.6 stream is enabled ? Let’s try to enable version 10 :
[root@rhel8beta1 ~]# yum module enable postgresql:10
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:08:06 ago on Thu Dec 13 07:52:30 2018.
Dependencies resolved.
======================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================
Switching module streams:
postgresql 9.6 -> 10


Transaction Summary
======================================================================================================================================


Is this ok [y/N]: y
Complete!


Switching module streams does not alter installed packages (see 'module enable' in dnf(8) for details)
[root@rhel8beta1 ~]#

It’s better now :
[root@rhel8beta1 ~]# yum module list --enabled
Failed to set locale, defaulting to C
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:13:22 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 10 [d][e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

But…
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#

…still using 9.6 :-(
After switching from one stream to another, we must upgrade the corresponding packages :
[root@rhel8beta1 ~]# yum distro-sync
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:18:40 ago on Thu Dec 13 08:34:05 2018.
Dependencies resolved.
==============================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================
Upgrading:
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M


Transaction Summary
==============================================================================================================================================
Upgrade 2 Packages


Total download size: 6.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 1.3 MB/s | 5.1 MB 00:03
(2/2): postgresql-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 371 kB/s | 1.5 MB 00:04
--------------------------------------------------------------------------------------------------------------- --------------------------------
Total 1.6 MB/s | 6.5 MB 00:04
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/1
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrading : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrading : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Cleanup : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Cleanup : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Verifying : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Verifying : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 2/4
Verifying : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 3/4
Verifying : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 4/4


Upgraded:
postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64


Complete!
[root@rhel8beta1 ~]#

And now it’s fine :
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

So what ?

That was only a first quick try with the AppStream functionality. What we can remember here is that with this new way of managing packages we can benefit from the parallel availability of multiple versions of software. This is due to the dissociation between the kernel space (BaseOS) – which is still managed in a traditional way – and the user space (AppStream) – which is now deployed in the form of “containerized” applications.
Up to now, when we wanted to upgrade an application to a given version, we had to think about the inter-dependencies between this application and the others that we didn’t want to update. With RHEL8, we can now upgrade one while keeping the others at their current version.

Cet article Red Hat Enterprise Linux 8 – Application Streams est apparu en premier sur Blog dbi services.

SQL Plan Directives

Tom Kyte - Thu, 2018-12-13 08:26
Team, reading this article https://blogs.oracle.com/oraclemagazine/on-learning-from-mistakes and setting up a demo like this in my local database. In the below example pretend OWNER as "state" and OBJECT_TYPE as "city" (similar to the examp...
Categories: DBA Blogs

To arrive alphanumeric value from a string

Tom Kyte - Thu, 2018-12-13 08:26
Hi Tom, I have a problem to arrive unique alphanumeric value I have to arrive unique short name(5digit) from names and insert into the table. We have to check existing short names and arrive next heirarchy. Eg.The names are like below ...
Categories: DBA Blogs

List tablespaces and usage

Tom Kyte - Thu, 2018-12-13 08:26
how can I list all the tablespaces in the ORCL database by displaying the available space, used space, status and type? Also the permanent and temporary files related to those tablespaces in a separate query.
Categories: DBA Blogs

Inconsistent behavior with CROSS APPLY and OUTER APPLY

Tom Kyte - Thu, 2018-12-13 08:26
This question is also asked on http://stackoverflow.com/questions/42593148/inconsistent-behavior-with-cross-apply-and-outer-apply by a Graphql collaborator who is developing pagination for supporting Oracle db. have a schema in Oracle-12c similar...
Categories: DBA Blogs

PCF Healthwatch 1.4 just got a new UI

Pas Apicella - Thu, 2018-12-13 04:45
PCF Healthwatch is a service for monitoring and alerting on the current health, performance, and capacity of PCF to help operators understand the operational state of their PCF deployment.

Finally got around to installing the new PCF Healthwatch 1.4, and the first thing which struck me was the UI's main dashboard page. It's clear what I need to look at in seconds, and the alerts on the right-hand side are also useful.

Some screen shots below





papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-99$ http http://healthwatch.run.haas-99.pez.pivotal.io/info
HTTP/1.1 200 OK
Content-Length: 45
Content-Type: application/json;charset=utf-8
Date: Thu, 13 Dec 2018 10:26:07 GMT
X-Vcap-Request-Id: ac309698-74a6-4e94-429a-bb5673c1c8f7

{
    "message": "PCF Healthwatch available"
}

More Information

Pivotal Cloud Foundry Healthwatch
https://docs.pivotal.io/pcf-healthwatch/1-4/index.html

Categories: Fusion Middleware
