While trying to create a user in Oracle Database 18c Express Edition, I kept getting an “ORA-65096: invalid common user or role name” error, which didn’t make sense to me. After validating my command, confirming that I was signed in as an admin user, and determining that my “CREATE USER” statement was formatted correctly, I did some additional research and found that, starting with Oracle version 12c, the hidden parameter “_ORACLE_SCRIPT” needs to be set to “true” before a user with a traditional (non-C##) name can be created.
The “_ORACLE_SCRIPT” Value
To set the “_ORACLE_SCRIPT” hidden parameter to “true”, you need to run an “ALTER SESSION” command. After that, you will be able to create the desired user and run your grant commands as usual. Keep in mind that hidden (underscore) parameters are not officially supported by Oracle, so use this workaround with care.
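As a sketch, the sequence looks like this in a SQL*Plus session connected as a privileged user such as SYSTEM; the user name and password below are placeholders:

```sql
-- Allow traditional user names in this session
-- (hidden parameter; not officially supported by Oracle)
alter session set "_ORACLE_SCRIPT" = true;

-- The CREATE USER statement now succeeds without the C## prefix
create user demo_user identified by demo_password;

-- Grant privileges as usual (example grants)
grant connect, resource to demo_user;
```

The setting applies only to the current session, so it must be re-run in any new session that needs to create such users.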
Oracle provides a few ways to determine which database you are working in. Admittedly, I usually know which database I’m working in, but recently I did an Oracle Database Express Edition (XE) install that did not go as expected, and I had reason to confirm which database I was actually in when the SQL*Plus session opened. This led me to consider how one would prove exactly which database they were connected to. As it happens, Oracle has a few ways to quickly display this information, and here are two easy ways to find out your Oracle database name in SQL*Plus:
The GLOBAL_NAME Table
The first method is to run a quick select against the GLOBAL_NAME
table, which is publicly available to logged-in users of the database.
Example GLOBAL_NAME Select Statement
select * from global_name;
The V$DATABASE View
The second method is to run a quick select against V$DATABASE.
However, V$DATABASE is a dynamic performance view, and not every user will have the privileges required to query it.
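A minimal example, assuming your account has been granted access to the view (for instance via SELECT_CATALOG_ROLE):

```sql
-- Returns the database name; requires SELECT privilege on V$DATABASE
select name from v$database;
```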
Data-driven decision making is at the center of all things. The emergence of
data science and machine learning has further reinforced the importance of data
as the most critical commodity in today’s world. From FAAMG (the biggest five
tech companies: Facebook, Amazon, Apple, Microsoft, and Google) to governments
and non-profits, everyone is busy leveraging the power of data to achieve their
goals. Unfortunately, this growing demand for data has exposed the inefficiency
of the current systems to support the ever-growing data needs. This
inefficiency is what led to the evolution of what we today know as Logical Data
What Is a Logical
simple words, a data lake is a data repository that is capable of storing any
data in its original format. As opposed to traditional data sources that
use the ETL (Extract, Transform, and Load) strategy, data lakes work on the ELT
(Extract, Load, and Transform) strategy. This means data does not have to be
first transformed and then loaded, which essentially translates into reduced
time and efforts. Logical data lakes have captured the attention of
millions as they do away with the need to integrate data from different data
repositories. Thus, with this open access to data, companies can now begin to
draw correlations between separate data entities and use this exercise to their
advantage.
Primary Use Case Scenarios of Data Lakes
Logical data lakes are a
relatively new concept, and thus, readers can benefit from some knowledge of
how logical data lakes can be used in real-life scenarios.
Experimental Analysis of Data:
Logical data lakes can
play an essential role in the experimental analysis of data to establish its
value. Since data lakes work on the ELT strategy, they grant deftness and speed
to processes during such experiments.
To store and analyze IoT Data:
Logical data lakes can
efficiently store the Internet of Things type of data. Data lakes are capable
of storing both relational as well as non-relational data. Under logical data
lakes, it is not mandatory to define the structure or schema of the data
stored. Moreover, logical data lakes can run analytics on IoT data and come up
with ways to enhance quality and reduce operational cost.
To improve Customer Experience:
Logical data lakes can
methodically combine CRM data with social media analytics to give businesses an
understanding of customer behavior as well as customer churn and its various
causes.
To create a Data Warehouse:
Logical data lakes
contain raw data. Data warehouses, on the other hand, store structured and
filtered data. Creating a data lake is the first step in the process of data
warehouse creation. A data lake may also be used to augment a data warehouse.
To support the reporting and analytical function:
Data lakes can also be
used to support the reporting and analytical function in organizations. By
storing maximum data in a single repository, logical data lakes make it easier
to analyze all data to come up with relevant and valuable findings.
A logical data lake is a comparatively new area of study. However, it can be said with confidence that logical data lakes will revolutionize traditional approaches to data management.
The private cloud concept means running the cloud software architecture, and possibly specialized hardware, within a company’s own facilities, supported by the customer’s own employees, rather than having it hosted in a data center operated by commercial providers like Amazon, IBM, Microsoft, or Oracle.
A private (internal) cloud may follow one or more of these patterns and may be part
of a larger hybrid-cloud strategy.
Home-Grown, where the company has built its own software and/or hardware cloud infrastructure, and the private cloud is managed entirely by the company’s own resources.
Commercial-Off-The-Shelf (COTS), where the cloud software and/or hardware is purchased from a commercial vendor and installed on the company’s premises, where it is primarily managed by the company’s resources with licensed technical support from the vendor.
Appliance-Centric, where vendor specialty hardware and software are pre-assembled and pre-optimized, usually on proprietary databases, to support a specific cloud strategy.
Hybrid-Cloud, which may use some or all of the above approaches with added components such as:
Virtualization software to integrate, private-cloud, public-cloud, and non-cloud information resources into a central delivery architecture.
Public/Private cloud, where proprietary and customer-sensitive information is kept on premises and less sensitive information is housed in one or more public clouds. The Public/Private hybrid-cloud strategy can also provision temporary, short-duration increases in computational resources, or support workflows where application and information development occur in the private cloud and are then migrated to a public cloud for production.
In the modern technological era, there are a variety of cloud patterns, but this explanation highlights the major aspects of the private cloud concept which should clarify and assist in strategizing for your enterprise cloud.
Data virtualization is a data management approach that allows data to be retrieved and manipulated without requiring technical details such as where the data is physically located or how it is formatted at the source. Denodo is a data virtualization platform that offers more use cases than many data virtualization products available today. The platform supports a variety of operational, big data, web integration, and typical data management use cases helpful to technical and business teams. By offering real-time access to comprehensive information, Denodo helps businesses across industries execute complex processes efficiently. Here are 10 Denodo data virtualization use cases.
1. Big data analytics
Denodo is a popular data virtualization tool for examining large data sets to uncover hidden patterns, market trends, and unknown correlations, among other analytical information that can help in making informed decisions.
2. Mainstream business intelligence and data warehousing
Denodo can collect corporate data from external data sources and operational systems to allow data consolidation, analysis as well as reporting to present actionable information to executives for better decision making. In this use case, the tool can offer real-time reporting, logical data warehouse, hybrid data virtualization, data warehouse extension, among many other related applications.
3. Data discovery
Denodo can also be used for self-service business intelligence and reporting as well as “What If” analytics.
4. Agile application development
Data services requiring software development where requirements and solutions keep evolving via the collaborative effort of different teams and end-users can also benefit from Denodo. Examples include Agile service-oriented architecture and BPM (business process management) development, Agile portal & collaboration development as well as Agile mobile & cloud application development.
5. Data abstraction for modernization and migration
Denodo also comes in handy when abstracting data sets to allow for migration and modernization. Specific applications for this use case include, but aren’t limited to, data consolidation processes in mergers and acquisitions, legacy application modernization, and data migration to the cloud.
6. B2B data services & integration
Denodo also supports big data services for business partners. The platform can integrate data via web automation.
7. Cloud, web and B2B integration
Denodo can also be used in social media integration, competitive BI, web extraction, cloud application integration, cloud data services, and B2B integration via web automation.
8. Data management & data services infrastructure
Denodo can be used for unified data governance, providing a canonical view of data, enterprise data services, virtual MDM, and enterprise business data glossary.
9. Single view application
The platform can also be used for call centers, product catalogs, and vertical-specific data applications.
10. Agile business intelligence
Last but not least, Denodo can be used in business intelligence projects to improve inefficiencies of traditional business intelligence. The platform can develop methodologies that enhance outcomes of business intelligence initiatives. Denodo can help businesses adapt to ever-changing business needs. Agile business intelligence ensures business intelligence teams and managers make better decisions in shorter periods.
With over two decades of innovation, applications in 35+ industries, and the multiple use cases discussed above, it’s clear why Denodo is a leading platform in data virtualization.
Developed by the Apache Software Foundation, Apache Derby DB is a completely free, open-source relational database system written purely in Java. It has multiple advantages that make it a popular choice for Java applications requiring small to medium-sized databases.
With over 15 years in development, Derby DB has had time to grow, add new components, and improve existing ones. Even though it has an extremely small footprint – only about 3.5MB of JAR files – Derby is a full-featured ANSI SQL database, supporting the latest SQL standards, transactions, and security features.
The small footprint adds to its versatility and portability – Derby can easily be embedded into Java applications with almost no performance impact. It’s extremely easy to install and configure, requiring almost no administration afterward. Once implemented, there is no need to further modify or set up the database at the end user’s computer. Alongside the embedded framework, Derby can also be used in a more familiar server mode.
All documentation, including manuals for specific versions of Derby, can be found on the official website.
Java is compatible with almost all platforms, including Windows, Linux, and macOS. Since Derby DB is implemented completely in Java, it can be easily transferred without the need for different distribution downloads. It can run on any properly certified Java Virtual Machine. Apache’s distribution includes the Derby code without any modification to the underlying source.
Derby supports transactions for quick and secure data retrieval, as well as referential integrity. Even though stored procedures are written in Java, in client/server mode Derby can also be used from PHP, Python, and Perl programs.
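As a sketch of Derby’s transaction support in an ij session, using a hypothetical ACCOUNTS table for illustration:

```sql
-- ij runs with autocommit on by default; turn it off to group statements
autocommit off;

-- Hypothetical table for illustration
create table accounts (id int primary key, balance int);
insert into accounts values (1, 100);

-- The statements below commit (or roll back) together as one unit
update accounts set balance = balance - 10 where id = 1;
commit;
```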
Data can be encrypted on disk, and database triggers are supported to maintain the integrity of the information. In addition, custom functions can be created with any Java library, so users can manipulate the data however they want.
Embedded and Server Modes
Derby’s embedded mode is usually recommended as a beginner-friendly option. The main differences are in who manages the database along with how it’s stored.
When Derby is integrated as a whole and becomes a part of the main program, it acts as a persistent data store and the database is managed through the application. It also runs within the Java Virtual Machine of the application. In this mode, no other user is able to access the database – only the app that it is integrated into. As a result of these limits, the embedded mode is most useful for single-user apps.
If it’s run in server mode, the user starts a Derby network server which is tasked with responding to database requests. Derby runs in a Java Virtual Machine that hosts the server. The database is loaded onto the server, waiting for client applications to connect to it. This is the most typical architecture used by most of the other bigger databases, such as MySQL. Server mode is highly beneficial when more than one user needs to have access to the database across the network.
Derby has to be downloaded and extracted from the .zip package before being used. Downloads can be found on Apache’s official website.
Interacting with Derby is done through the ‘ij’ tool, which is an interactive JDBC scripting program. It can be used for running interactive queries and scripts against a Derby database. The ij tool is run through the command shell.
The initial Derby connection command differs depending on whether it’s going to be run in embedded or server mode.
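For illustration, here is what the two connection styles can look like in ij; the database name “mydb” is a placeholder, and the server-mode URL assumes a Derby network server is already listening on the default port 1527:

```sql
-- Embedded mode: the database runs inside this JVM;
-- create=true creates the database if it does not exist
connect 'jdbc:derby:mydb;create=true';

-- Server mode: connect through the Derby network server
connect 'jdbc:derby://localhost:1527/mydb;create=true';
```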
With modern businesses continually looking for ways to
streamline their operations, DevOps has become a common approach to software
delivery, used by development and operations teams to set up, test, deploy, and monitor applications.
To help you understand more about this approach, let’s
briefly discuss DevOps.
What is DevOps?
DevOps comes from two words: ‘development’ and ‘operations.’ It
describes a set of IT practices that seeks to have software developers and the
operations team work together on the same project in a more collaborative and
integrated way.
In simple words, this is a culture that promotes cooperation
between Development and Operations teams in an organization to ensure faster
production in an automated, recurring manner.
The approach aims at breaking down traditional barriers that
have existed between these two important teams of the IT department in any
organization. When deployed smoothly, this approach can help reduce time and
friction that occur when deploying new software applications in an
organization.
These efforts lead to quicker development cycles, which
ultimately save money and time, and give an organization a competitive edge
against its rivals with longer, more rigid development cycles.
DevOps helps to increase the speed with which an organization
delivers applications and services to customers, thereby competing favorably
and actively in the market.
What Is Needed for DevOps to Be Successfully Executed?
For an organization to appeal to customers, it must be agile,
lean, and swift to respond to dynamic demands in the market. For this to happen, all stakeholders in the
delivery process have to work together.
Development teams, which focus on designing, developing,
delivering, and running the software reliably and quickly, need to work with
the operations team, which is tasked with the work of identifying and resolving
problems in the software as soon as possible.
By having a common approach across software developers and
operation teams, an organization will be able to monitor and analyze holdups and
scale as quickly as possible. This way, they will be able to deliver and deploy
reliable software in a shorter time.
We hope that our simplified guide has enabled you to understand
what DevOps is and why it is important in modern organizations.