Oracle Express Edition Error – ORA-65096: invalid common user or role name

While trying to create a user in Oracle Database 18c Express Edition, I kept getting an “ORA-65096: invalid common user or role name” error, which didn’t make sense to me. After validating my command, confirming that I was signed in as an admin user, and determining that my “CREATE USER” statement was formatted correctly, I did some additional research and found that the hidden parameter “_ORACLE_SCRIPT” needs to be set to “true” in Oracle version 12c and higher.

Setting the “_ORACLE_SCRIPT” Value

To set the “_ORACLE_SCRIPT” hidden parameter to “true”, you need to run an “alter session” command. Then you will be able to create the desired user and run your grant commands as usual.

Alter SQL Command

alter session set "_oracle_script"=true;
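Once the session parameter is set, the user can be created and granted privileges in the same session. A minimal sketch of the full workflow (the username, password, and grants below are illustrative, not from my original session):

```sql
-- Allow local (non-common) user names in this session (12c and later).
alter session set "_oracle_script"=true;

-- Hypothetical example user; substitute your own name and password.
create user demo_user identified by demo_password;

-- Typical starter grants so the new user can log in and create objects.
grant connect, resource to demo_user;
```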

How to Determine Your Oracle Database Name

Oracle provides a few ways to determine which database you are working in.  Admittedly, I usually know which database I’m working in, but recently I did an Oracle Database Express Edition (XE) install which did not go as expected, and I had reason to confirm which database I was actually in when the SQL*Plus session opened.  This led me to consider how one would prove exactly which database they were connected to.  As it happens, Oracle has a few ways to quickly display this information, and here are two easy ways to find out your Oracle database name in SQL*Plus:

  • V$DATABASE
  • GLOBAL_NAME

Checking the GLOBAL_NAME table

The first method is to run a quick select against the GLOBAL_NAME table, which is publicly available to logged-in users of the database.

Example GLOBAL_NAME Select Statement

select * from global_name;

Checking the V$DATABASE View

The second method is to run a quick select against V$DATABASE. However, not everyone will have access to the V$DATABASE view.

Example V$DATABASE Select Statement

select name from V$database;
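A third quick check, assuming your session can read the USERENV application context, is SYS_CONTEXT:

```sql
-- Returns the database name for the current session.
select sys_context('USERENV', 'DB_NAME') as db_name from dual;
```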

Using Logical Data Lakes

Today, data-driven decision making is at the center of all things. The emergence of data science and machine learning has further reinforced the importance of data as the most critical commodity in today’s world. From FAAMG (the biggest five tech companies: Facebook, Amazon, Apple, Microsoft, and Google) to governments and non-profits, everyone is busy leveraging the power of data to achieve their goals. Unfortunately, this growing demand for data has exposed the inability of current systems to support ever-growing data needs. This inefficiency is what led to the evolution of what we today know as logical data lakes.

What Is a Logical Data Lake?

In simple words, a data lake is a data repository that is capable of storing any data in its original format. As opposed to traditional data warehouses that use the ETL (Extract, Transform, and Load) strategy, data lakes work on the ELT (Extract, Load, and Transform) strategy. This means data does not have to be transformed before it is loaded, which essentially translates into reduced time and effort. Logical data lakes have captured wide attention because they do away with the need to integrate data from different data repositories. With this open access to data, companies can begin to draw correlations between separate data entities and use this exercise to their advantage.
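The ETL-versus-ELT distinction can be sketched in a few lines. The example below is purely illustrative, using plain Python lists as stand-ins for a source system and a data lake (all names are made up):

```python
raw_records = ['42', 'oops', '7']          # source data in its original format

# ETL: transform first, then load -- malformed rows never reach storage.
def etl(records):
    transformed = [int(r) for r in records if r.isdigit()]
    lake = list(transformed)               # only clean rows are stored
    return lake

# ELT: load everything as-is, transform later on demand.
def elt(records):
    lake = list(records)                   # raw data lands untouched
    def transform():
        return [int(r) for r in lake if r.isdigit()]
    return lake, transform

etl_lake = etl(raw_records)
elt_lake, transform = elt(raw_records)

print(etl_lake)      # the malformed 'oops' row is already gone
print(elt_lake)      # the raw row survives and can be re-examined later
print(transform())
```

The key point: in the ELT sketch the raw record is preserved in the lake, so a later, better transformation can still recover it; in the ETL sketch it is lost before loading.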

Primary Use Case Scenarios of Data Lakes

Logical data lakes are a relatively new concept, and thus, readers can benefit from some knowledge of how logical data lakes can be used in real-life scenarios.

To conduct Experimental Analysis of Data:

  • Logical data lakes can play an essential role in the experimental analysis of data to establish its value. Since data lakes work on the ELT strategy, they grant deftness and speed to processes during such experiments.

To store and analyze IoT Data:

  • Logical data lakes can efficiently store Internet of Things (IoT) data, since data lakes are capable of storing both relational and non-relational data. Under logical data lakes, it is not mandatory to define the structure or schema of the stored data. Moreover, logical data lakes can run analytics on IoT data and come up with ways to enhance quality and reduce operational cost.

To improve Customer Interaction:

  • Logical data lakes can methodically combine CRM data with social media analytics to give businesses an understanding of customer behavior as well as customer churn and its various causes.

To create a Data Warehouse:

  • Logical data lakes contain raw data. Data warehouses, on the other hand, store structured and filtered data. Creating a data lake is the first step in the process of data warehouse creation. A data lake may also be used to augment a data warehouse.

To support reporting and analytical function:

  • Data lakes can also be used to support the reporting and analytical function in organizations. By storing maximum data in a single repository, logical data lakes make it easier to analyze all data to come up with relevant and valuable findings.

A logical data lake is a comparatively new area of study. However, it can be said with some confidence that logical data lakes will change how organizations approach data storage and analysis.

A 720-Degree View of the Customer

The 360-degree view of the customer is a well-explored concept, but it is not adequate in the digital age. Every firm, whether Google or Amazon, is deploying tools to understand customers in a bid to serve them better. A 360-degree view demanded that a company consult its internal data to segment customers and create marketing strategies. It has now become imperative for companies to look outside their own channels, to platforms like social media and review sites, to gain insight into the motivations of their customers. The 720-degree view of the customer is discussed further below.

What is the 720-degree view of the customer?

A 720-degree view of the customer refers to a three-dimensional understanding of customers, based on deep analytics. It includes information on every customer’s level of influence, buying behavior, needs, and patterns. A 720-degree view will enable retailers to offer relevant products and experiences and to predict future behavior. If done right, this concept should help retailers leverage emerging technologies such as mobile commerce, social media, cloud-based services, and analytics to sustain lifelong customer relationships.

What Does a 720-Degree View of the Customer Entail?

Every business desires to cut costs, gain an edge over its competitors, and grow its customer base. So how exactly will a 720-degree view of the customer help a firm advance its cause?

Social Media

Social media channels help retailers interact more effectively and deeply with their customers. They offer reliable insights into what customers would appreciate in products, services, and marketing campaigns. Retailers can not only evaluate feedback but also deliver real-time customer service. A business that integrates its services with social media can assess customer behavior through signals like likes and dislikes. Some platforms also enable customers to buy products directly.

Customer Analytics

Customer analytics constructs more detailed customer profiles by integrating different data sources like demographics, transactional data, and location. When this internal data is combined with information from external channels like social media, the result is a comprehensive view of the customer’s needs and wants. A firm can subsequently make more informed decisions on inventory, supply chain management, pricing, marketing, and customer segmentation. Analytics further come in handy when monitoring transactions, personalized services, waiting times, and website performance.

Mobile Commerce

The modern customer demands convenience and device compatibility. Mobile commerce also accounts for a significant amount of retail sales, and retailers can explore multi-channel shopping experiences. By leveraging a 720-degree view of every customer, firms can provide consumers with the personalized experiences and flexibility they want. Marketing campaigns will also be very targeted as they will be based on the transactional behaviors of customers. Mobile commerce can take the form of mobile applications for secure payment systems, targeted messaging, and push notifications to inform consumers of special offers. The goal should be to provide differentiated shopper analytics.

Cloud

Cloud-based solutions provide real-time data across multiple channels, which yields an enhanced view of the customer. Real-time analytics influence decision-making in retail, and they also harmonize the physical and digital retail environments. Management will be empowered to detect sales trends as transactions take place.

The Importance of the 720-Degree Customer View

Traditional marketers were all about marketing to groups of similar individuals, often termed segmentation. This technique is, however, giving way to the more effective concept of personalized marketing. Marketing is currently channeled through a host of platforms, including social media, affiliate marketing, pay-per-click, and mobile. The modern marketer has to integrate the information from all these sources and match it to a real name and address. Companies can no longer depend on a fragmented view of the customer; there has to be an emphasis on personalization. A 720-degree customer view can offer benefits like:

Customer Acquisition

Firms can improve customer acquisition by drawing on the segment differences revealed by a new database of customer intelligence. Customer analytics will expose opportunities to be taken advantage of, while external data sources will reveal competitor tactics. There are always segment opportunities in any market, and these are best revealed by real-time customer data.

Cutting Costs

Marketers who rely on enhanced digital data can contribute to cost management in a firm. It takes less investment to serve loyal and satisfied customers because the firm is directly addressing their needs. Technology can be used to set customized pricing goals and to segment customers effectively.

New Products and Pricing

Real-time data, in addition to third-party information, has a crucial impact on pricing. Only firms with robust and relevant competitor and customer analytics can take advantage of this. Marketers with a 720-degree view of the customer across many channels will be able to seize opportunities for new products and personalized pricing to support business growth.

Advanced Customer Engagement

The first 360 degrees comprise an enterprise-wide, timely view of all customer interactions with the firm. The other 360 degrees consist of the customer’s relevant online interactions, which supplement the internal data a company holds. The modern customer makes buying decisions online, and that is where purchasing decisions are influenced. Can you predict a surge in demand before your competitors? A 720-degree view will help you anticipate trends while monitoring current ones.

720-degree Customer View and Big Data

Firms are always trying to make decision making as accurate as possible, and this is being made more accessible by Big Data and analytics. To deliver customer-centric experiences, businesses require a 720-degree view of every customer collected with the help of in-depth analysis.

Big Data analytical capabilities enable monitoring of after-sales service processes and the effective management of technology for customer satisfaction. A firm invested in staying ahead of the curve should maintain relevant databases of external and internal data. Designing specific products for various segments is made easier with the use of Big Data analytics. The analytics will also improve asset utilization and fault prediction. Big Data helps a company maintain a clearly defined roadmap for growth.

Conclusion

It is the dream of every enterprise to tap into customer behavior and create a rich profile for each customer. The importance of personalized customer experiences cannot be overstated in the digital era. The objective remains to develop products that can be advertised and delivered to the customers who want them, via their preferred platforms, and at a lower cost.

10 Denodo Data Virtualization Use Cases

Data virtualization is a data management approach that allows data to be retrieved and manipulated without requiring technical details such as where the data is physically located or how it is formatted at the source.

Denodo is a data virtualization platform that offers more use cases than many of the data virtualization products available today. The platform supports a variety of operational, big data, web integration, and typical data management use cases helpful to technical and business teams.

By offering real-time access to comprehensive information, Denodo helps businesses across industries execute complex processes efficiently. Here are 10 Denodo data virtualization use cases.

1. Big data analytics

Denodo is a popular data virtualization tool for examining large data sets to uncover hidden patterns, market trends, and unknown correlations, among other analytical information that can help in making informed decisions. 

2. Mainstream business intelligence and data warehousing

Denodo can collect corporate data from external data sources and operational systems to allow data consolidation, analysis, and reporting, presenting actionable information to executives for better decision making. In this use case, the tool can offer real-time reporting, a logical data warehouse, hybrid data virtualization, and data warehouse extension, among other related applications. 

3. Data discovery 

Denodo can also be used for self-service business intelligence and reporting as well as “What If” analytics. 

4. Agile application development

Data services requiring software development, where requirements and solutions keep evolving via the collaborative effort of different teams and end users, can also benefit from Denodo. Examples include Agile service-oriented architecture and BPM (business process management) development, Agile portal and collaboration development, and Agile mobile and cloud application development. 

5. Data abstraction for modernization and migration

Denodo also comes in handy when abstracting big data sets to allow for data migration and modernization. Specific applications for this use case include, but aren’t limited to, data consolidation in mergers and acquisitions, legacy application modernization, and data migration to the cloud.

6. B2B data services & integration

Denodo also supports B2B data services for business partners. The platform can integrate data via web automation. 

7. Cloud, web and B2B integration

Denodo can also be used in social media integration, competitive BI, web extraction, cloud application integration, cloud data services, and B2B integration via web automation. 

8. Data management & data services infrastructure

Denodo can be used for unified data governance, providing a canonical view of data, enterprise data services, virtual MDM, and enterprise business data glossary. 

9. Single view application

The platform can also be used for call centers, product catalogs, and vertical-specific data applications. 

10. Agile business intelligence

Last but not least, Denodo can be used in business intelligence projects to address the inefficiencies of traditional business intelligence. The platform supports methodologies that enhance the outcomes of business intelligence initiatives and helps businesses adapt to ever-changing needs. Agile business intelligence ensures business intelligence teams and managers make better decisions in shorter periods.

With over two decades of innovation, applications in 35+ industries, and the multiple use cases discussed above, it’s clear why Denodo is a leading platform in data virtualization.

Use and Advantages of Apache Derby DB

Developed by the Apache Software Foundation, Apache Derby DB is a completely free, open-source relational database system written purely in Java. It has multiple advantages that make it a popular choice for Java applications requiring small to medium-sized databases.

Reliable and Secure

With over 15 years in development, Derby DB has had time to grow, adding new components and improving existing ones. Even though it has an extremely small footprint – only about 3.5 MB of JAR files – Derby is a full-featured ANSI SQL database, supporting modern SQL standards, transactions, and security features.

The small footprint adds to its versatility and portability – Derby can easily be embedded into Java applications with almost no performance impact. It’s extremely easy to install and configure, requiring almost no administration afterward. Once implemented, there is no need to further modify or set up the database at the end user’s computer. Alongside the embedded framework, Derby can also be used in a more familiar server mode.

Documentation, including manuals for specific versions of Derby, can be found on the official Apache Derby website.

Cross-Platform Support

Java is compatible with almost all platforms, including Windows, Linux, and macOS. Since Derby DB is implemented completely in Java, it can be transferred easily without the need for platform-specific distributions. It can run on any Java Virtual Machine, as long as the JVM is properly certified, without any modification to Derby’s source code.

Derby supports transactions, which allow quick and reliable data retrieval from the database, as well as referential integrity. Even though stored procedures are written in Java, in client/server mode Derby can bind to the PHP, Python, and Perl programming languages. 

Data can be encrypted, and database triggers are supported to maintain the integrity of the information. Alongside that, custom functions can be created with any Java library, so users can manipulate the data however they want. 

Embedded and Server Modes

Derby’s embedded mode is usually recommended as a beginner-friendly option. The main differences are in who manages the database along with how it’s stored. 

When Derby is integrated as a whole and becomes a part of the main program, it acts as a persistent data store and the database is managed through the application. It also runs within the Java Virtual Machine of the application. In this mode, no other user is able to access the database – only the app that it is integrated into. As a result of these limits, the embedded mode is most useful for single-user apps.

If it’s run in server mode, the user starts a Derby network server which is tasked with responding to database requests. Derby runs in a Java Virtual Machine that hosts the server. The database is loaded onto the server, waiting for client applications to connect to it. This is the most typical architecture used by most of the other bigger databases, such as MySQL. Server mode is highly beneficial when more than one user needs to have access to the database across the network.
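The difference between the two modes shows up directly in the JDBC connection URL. As a sketch (the database name is illustrative; 1527 is Derby's default network server port):

```
jdbc:derby:sampleDB;create=true          -- embedded: database runs inside the application's JVM
jdbc:derby://localhost:1527/sampleDB     -- client/server: connects to a running Derby network server
```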

Downloading Derby

Derby has to be downloaded and extracted from the .zip package before being used. Downloads can be found on Apache’s official website.

Numerous download options are presented there, depending on the Java version that the package is going to be used with. 

Using Derby requires having the Java Development Kit (JDK) pre-installed on the system and then configuring the environment to use the JDBC driver. Official tutorials are available in the Apache Derby documentation.

Running and Manipulating Derby DB

Interacting with Derby is done through the ‘ij’ tool, an interactive JDBC scripting program. It can be used for running interactive queries and scripts against a Derby database, and it is run from the command shell.

The initial Derby connection command differs depending on whether it’s going to be run in embedded or server mode.
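For example, a short ij session in embedded mode might look like the following sketch (the database and table names are illustrative):

```
ij> connect 'jdbc:derby:sampleDB;create=true';
ij> create table users(id int primary key, name varchar(50));
ij> insert into users values (1, 'alice');
ij> select name from users;
```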

For a tutorial on how to use the connect commands, check out https://www.vogella.com/tutorials/ApacheDerby/article.html.

Some Useful Derby DB Documentation

  • Derby Reference Manual
  • Derby Server and Administration Guide
  • API Reference
  • Derby Tools and Utilities Guide
  • ij Basics

In conclusion, Derby DB is a lightweight yet efficient tool that can be integrated into various types of Java applications with ease.

How the IBM Common SQL Engine (CSE) Improves DB2

Common SQL Engine (CSE)

Today, newfound efficiencies and innovation are key to the success of any business – small, medium, or large. In the rapidly evolving field of data analytics, innovative approaches to handling data are particularly important, since data is the most valuable resource any business can have. The IBM Common SQL Engine delivers application and query compatibility that allows companies to turn their data into actionable insights, unleashing the power of their databases without constraints.

But is this really important?

Yes. Many businesses have accumulated tons of data over the years. This data resides in higher volumes, in more locations throughout the enterprise – on-premises and in the cloud – and in greater variety. Typically, this data should be a huge advantage, providing enterprises with actionable insights. But often, this doesn’t happen.

IBM Hybrid Data Management

With such a massive store of complex legacy data, many organizations find it difficult to decide what to do with it, or where to start. The process of migrating all that data into new systems is simply a non-starter. As a solution, enterprises are turning to IBM Db2 – a hybrid, intuitive data approach that marries data and analytics seamlessly. IBM Db2 hybrid data management allows flexible cloud and on-premises deployment of data.

However, such levels of flexibility typically require organizations to rewrite or restructure the queries and applications that will use the diverse, ever-changing data. These changes may even require licensing new software, which is costly and often infeasible. To bridge this gap, the Common SQL Engine (CSE) comes into play.

How Is the IBM Common SQL Engine Positioning Db2 for the Future?

The IBM Common SQL Engine inserts a single layer of data abstraction at the very data source. This means that, instead of migrating the data all at once, you can now apply data analytics wherever the data resides – whether on private, public or hybrid cloud – by using the Common SQL Engine as a bridge.

IBM’s Common SQL Engine provides portability and consistency of SQL commands, meaning that SQL is functionally portable across multiple implementations. It allows seamless movement of workloads to the cloud and enables multiplatform integrations and configurations regardless of programming language.

Ideally, the Common SQL Engine is supposed to be the heart of the query and the foundation of application compatibility. But it does so much more!

Its compatibility extends beyond data analytic applications to include security, management, governance, data management, and other functionalities as well.

How does this improve the quality, flexibility, and portability of Db2?

By allowing for integration across multiple platforms, workloads and programming languages, the Common SQL Engine, ultimately, leads to a “data without limits” environment for Db2 hybrid data management family through:

  1. Query and application compatibility

The Common SQL engine (CSE) ensures that users can write a query, and be confident that it will work across the Db2 hybrid data management family of offerings. With the CSE, you can change your data infrastructure and location – on-cloud or on-premises – without having to worry about license costs and application compatibility.

  2. Data virtualization and integration

The Common SQL Engine has a built-in data virtualization service that ensures you can access your data from all your sources. These services span the Db2 family of offerings, including IBM Db2, IBM Db2 Warehouse, and IBM Db2 Big SQL, among others.

The service also applies to the IBM Integrated Analytics System, Teradata, Oracle, PureData, and Microsoft SQL Server. Besides, you can work seamlessly with open-source solutions such as Hive, and cloud sources such as Amazon Redshift. Such levels of integration are unprecedented!

By allowing users to effectively pull data from Db2 data stores and integrate it with data from non-IBM stores using a single query, the common SQL engine places Db2 at an authoritative position as compared to other data stores.
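In Db2-style federation, this single-query pattern is typically set up by registering a remote table under a local nickname, after which it can be joined like any local table. A hedged sketch (the server, schema, and table names are illustrative, and assume federation has already been configured):

```sql
-- Expose a remote Oracle table through a local nickname.
create nickname remote_orders for oracle_srv.sales.orders;

-- One query joining local Db2 data with the remote source.
select c.customer_id, c.name, o.order_total
from   db2_customers c
join   remote_orders o
on     o.customer_id = c.customer_id;
```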

  3. Flexible licensing

Licensing is one of the hardest nuts to crack, especially for organizations that rely on technologies such as the cloud to deliver their services. While application compatibility and data integration will save you time, flexible licensing saves you money on the spot.

IBM’s Common SQL Engine allows flexible licensing, meaning that you can purchase one license model and deploy it whenever needed, or as your data architecture evolves. Using IBM’s FlexPoint licensing, you can purchase FlexPoints and use them across all Db2 data management offerings.

The flexible licensing will not only simplify the adoption and exchange of platform capabilities, but it also positions your business strategically by making it more agile. Your data managers will be able to access the tools needed on the fly, without going through a lethargic and tedious procurement process.

The IBM Db2 Data Management Family Is Supported by the Common SQL Engine (CSE)

IBM Db2 is a family of customizable, deployable databases that allows enterprises to leverage existing investments. IBM Db2 allows businesses to use any type of data from either a structured or unstructured database (or data warehouse). It provides the right data foundation with industry-leading data compression, on-premises and cloud deployment options, modern data security, robust performance for mixed workloads, and the ability to adjust and scale without redesigning.

The IBM Db2 family enables businesses to adapt and scale quickly and remain competitive without compromising security, risk levels, or privacy. It features:

  • Always-on availability
  • Deployment flexibility: on-premises, scale-on-demand, and private or cloud deployments
  • Compression and performance
  • Embedded IoT technology that allows businesses to act on data on the fly

Some of these Db2 family offerings that are supported by the common SQL engine include:

  • Db2 Database
  • Db2 Hosted
  • Db2 Big SQL
  • Db2 on Cloud
  • Db2 Warehouse
  • Db2 Warehouse on Cloud
  • IBM Integrated Analytics System (IIAS)

Db2 Family Offerings and Beyond

Since the Common SQL Engine mainly focuses on data federation and portability, other non-IBM databases can plug into the engine for SQL processing as well. These third-party offerings include:

  • Watson Data Platform
  • Oracle
  • Hadoop
  • Microsoft SQL Server
  • Teradata
  • Hive

Conclusion

The IBM Common SQL Engine allows organizations to fully use data analytics to future-proof their business while remaining agile and competitive. Besides the benefits of having robust tools woven into the CSE, the engine offers superior analytics and machine-learning positioning. Data processing can now happen dramatically faster – 2X to 5X in many cases. The IBM Common SQL Engine adds important capabilities to Db2, including freedom of location, freedom of use, and freedom of assembly.
