As the data modeling process in Denodo moves through the conceptual layers of the data warehouse, the data structures and their associated metadata evolve.
The Base Layer
As the modeling process begins, the base layer is the ingestion layer, where the source system data structures are recreated in Denodo and fields are converted to Denodo Virtual Query Language (VQL) data types. The base layer is what folks with a traditional data warehousing background would think of as Staging or Landing. These base layer views should most closely mirror the technical structure and data characteristics of the input data source and will be the least business friendly in their organization, naming, and metadata.
The Semantics Layer
The semantics layer is where the major data reorganization, data transformation, and the application of business-friendly field names and metadata begin. The semantics layer is what folks with a traditional data warehousing background would think of as the Data Warehouse (DW) or Enterprise Data Warehouse (EDW). The semantics layer of the logical data warehouse (LDW) performs several tasks:
Data from multiple input sources is consolidated
The model becomes multi-dimensional (Fact and Dimension oriented)
Field names and descriptive metadata are changed to meaningful, domain normalized, business-friendly names and descriptions.
Domain normalizing business rules and transformations are applied.
Serves as a data source for the business layer and reporting layer (a sketch follows this list).
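For illustration, here is a hedged VQL-style sketch of a semantics-layer view; the base view names and source field names (bv_crm_customer, cust_nm, and so on) are hypothetical, and the statement shows the pattern rather than code from any real project.

    -- A semantics-layer view: consolidates two hypothetical base views and
    -- renames cryptic source fields to business-friendly names.
    CREATE OR REPLACE VIEW customer AS
    SELECT c.cust_no        AS customer_number,
           c.cust_nm        AS customer_name,
           e.credit_lmt_amt AS credit_limit_amount
    FROM bv_crm_customer c
    JOIN bv_erp_customer e
      ON c.cust_no = e.cust_no;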
The Business Layer
The business layer, which is considered optional by Denodo, is modeled along a narrower business subject orientation, and more specialized business rules are applied. This is what folks with a traditional data warehousing background would think of as a Datamart (DM).
The business layer of the logical data warehouse (LDW) performs several tasks:
Limits and optimizes the data to facilitate business intelligence and reporting activities concerning a specific line of business or business topic (e.g., Financials, Human Resources, Inventory, Asset Management, etc.)
Business-specific/customized rules and metadata are applied
Supplements the semantics layer and serves as a data source for the reporting layer.
Additional data consolidation and data structure denormalization (flattening) may occur in the business layer, as sketched below.
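The following hedged VQL-style sketch shows the kind of subject-oriented, flattened view the business layer might hold; all view and field names (employee_fact, department_dim, and so on) are hypothetical.

    -- A business-layer view for one subject area (Human Resources):
    -- denormalizes the dimensional model by joining and aggregating.
    CREATE OR REPLACE VIEW hr_headcount AS
    SELECT d.department_name,
           e.employee_status,
           COUNT(*) AS employee_count
    FROM employee_fact e
    JOIN department_dim d
      ON e.department_key = d.department_key
    GROUP BY d.department_name, e.employee_status;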
The Reporting Layer
The reporting layer, which is considered optional by Denodo, is the most customized layer and sees the most reporting-topic specialization and need-specific transformation. The reporting layer is where a traditional data warehouse might provide customized reporting or system interface views, where interface ETLs produce interface files, and where reporting teams do more of their own development.
The reporting layer of the logical data warehouse (LDW) performs several tasks:
Provides consumer-specific customized rules and metadata
Provides consumer-specific data organization/layouts
Data is optimized for consumer purposes and may be highly or entirely denormalized to meet consumer needs (see the sketch below).
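As a final hedged sketch in the same hypothetical model, a reporting-layer view might reshape the business-layer view above into one consumer's preferred layout:

    -- A reporting-layer view: consumer-specific column labels and filters
    -- over the hypothetical business-layer view hr_headcount.
    CREATE OR REPLACE VIEW rpt_active_headcount AS
    SELECT department_name AS "Department",
           employee_count  AS "Active Headcount"
    FROM hr_headcount
    WHERE employee_status = 'ACTIVE';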
The Denodo Data Catalog provides data governance and self-service capabilities that supplement the core capabilities of Denodo Virtual DataPort (VDP). Six roles provide the ability to assign or deny capabilities within the Denodo Data Catalog and supplement the database, row, and column security and permissions of VDP.
Denodo Data Catalog Role Name | The Tasks the Role Can Perform
data_catalog_classifier | Assigns categories, tags, and custom properties groups to views and web services.
data_catalog_editor | Edits views, web services, and databases; creates, edits, and deletes tags, categories, custom properties groups, and custom properties.
data_catalog_manager | Can do the same as a user with the roles “data_catalog_editor” and “data_catalog_classifier”.
data_catalog_content_admin | Configures personalization options and content search.
data_catalog_admin | Can perform any action of all the other Data Catalog roles.
data_catalog_exporter | Can export the results of a query from the Denodo Data Catalog.
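As a rough illustration of putting these roles to use, the sketch below creates a user and grants one of the catalog roles. The exact VQL statement forms here are assumptions from memory, not verified syntax, so check the VQL Guide before relying on them; the user name is hypothetical.

    -- Assumed VQL forms (verify against the VQL Guide): create a user and
    -- grant the Data Catalog exporter role. User name is hypothetical.
    CREATE USER analyst_jane 'initial_password';
    ALTER USER analyst_jane GRANT ROLE data_catalog_exporter;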
Over recent years, business enterprises relying on accurate and consistent data to make informed decisions have been gravitating towards integration technologies. The subject of Enterprise Application Integration (EAI) and Extraction, Transformation & Loading (ETL) lately seems to pop up in most Enterprise Information Management (EIM) discussions.
From an architectural perspective, both techniques share a striking similarity. However, they essentially serve different purposes when it comes to information management. We’ve decided to do a little bit of research and establish the differences between the two.
Enterprise Application Integration
EAI is an integration framework that consists of technologies and services, allowing for seamless coordination of vital systems, processes, as well as databases across an enterprise.
Simply put, this integration technique simplifies and automates your business processes to a whole new level without necessarily having to make major changes to your existing data structures or applications.
With EAI, your business can integrate essential systems like supply chain management, customer relationship management, business intelligence, enterprise resource planning, and payroll. The linking of these apps can be done at the back end via APIs or at the front end via the GUI.
The systems in question might use different databases or computer languages, exist on different operating systems, or be older systems that are no longer supported by the vendor.
The objective of EAI is to develop a single, unified view of enterprise data and information, as well as ensure the information is correctly stored, transmitted, and reflected. It enables existing applications to communicate and share data in real-time.
Extraction, Transformation & Loading
The general purpose of an ETL system is to extract data out of one or more source databases and then transfer it to a target destination system for better user decision making. Data in the target system is usually presented differently from the sources.
The extracted data goes through the transformation phase, which involves checking for data integrity and converting the data into a proper storage format or structure. It is then moved into other systems for analysis or querying.
Data loading typically involves writing data into a target database destination like a data warehouse or an operational data store.
ETL can integrate data from multiple systems. The systems we’re talking about in this case are often hosted on separate computer hardware or supported by different vendors.
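To ground the extract-transform-load flow described above, here is a minimal, generic SQL sketch; the table and column names (src_orders, dw_orders, and so on) are hypothetical.

    -- Transform-and-load step of a hypothetical ETL run.
    INSERT INTO dw_orders (order_id, customer_id, order_total_usd, load_date)
    SELECT o.id,
           o.cust_id,
           ROUND(o.total_cents / 100.0, 2),  -- transform: cents to dollars
           CURRENT_DATE                      -- audit column added during load
    FROM src_orders o
    WHERE o.status = 'COMPLETE';             -- integrity check during transform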
The Differences Between ETL and EAI
EAI:
Retrieves small amounts of data in one operation and is characterized by a high number of transactions
Utilized for process optimization and workflow
Does not require user involvement after it is implemented
Ensures a bi-directional data flow between the source and target applications
Ideal for real-time business data
Limited data validation
Integrating operations are pull and push

ETL:
A one-way process of creating a historical record from homogeneous or heterogeneous sources
Mainly designed to process large batches of data from source systems
Requires extensive user involvement
Meta-data driven and complex
Integrating operation is a pull
Supports proper profiling and data cleansing
Limited messaging capabilities
Both integration technologies are an essential part of EIM, as they provide strong capabilities for business intelligence initiatives and reporting. They can be used differently and sometimes in combination.
A Denodo virtualization project typically classifies the project duties of the primary implementation team into four primary roles.
Denodo Data Virtualization Project Roles
Data Virtualization Architect
Denodo Platform Administrator
Data Virtualization Developer
Denodo Platform Java Programmer
Data Virtualization Internal Support Team
Project Team Member Alignment
While each Denodo project role is grouped into security permissions and a set of duties, it is important to note that the assignment of roles among project team members can be very dynamic. Which team member performs a given role can change over the lifecycle of a Denodo project. One team member may hold more than one role at any given time, or acquire or lose roles based on the needs of the project.
Data Virtualization Project Role Duties
The knowledge, responsibilities, and duties of a Denodo data virtualization architect include:
A deep understanding of Denodo security features and data governance
Defines and documents best practices for users, roles, and security permissions
Has a strong understanding of enterprise architecture
Defines the data virtualization architecture and infrastructure
Guides the definition and documentation of the virtual data model, including delivery modes, data sources, data combinations, and transformations
The knowledge, responsibilities, and duties of a Denodo Platform Administrator include:
Denodo Platform installation and maintenance, such as:
Installs Denodo Platform servers
Defines Denodo Platform update and upgrade policies
Creates, edits, and removes environments, clusters, and servers
Manages Denodo licenses
Defines Denodo Platform backup policies
Defines procedures for artifact promotion between environments
Denodo Platform configuration and management, such as:
Configures Denodo Platform server ports
Configures platform memory and Java Virtual Machine (JVM) options
Sets the maximum number of concurrent requests
Sets up database configuration
Configures the cache server
Configures authentication for users connecting to the Denodo Platform (e.g., LDAP)
Secures (SSL) communications connections of Denodo components
Provides connectivity credential details for client tools/applications (JDBC, ODBC, etc.)
Configures resources
Sets up Version Control System (VCS) configuration for Denodo
Creates new virtual databases
Creates users and roles, and assigns privileges/roles
Executes diagnostics and monitoring operations, analyzes logs, and identifies potential issues
Manages load balancing variables
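For a flavor of a few of these administrative tasks, the sketch below creates a virtual database, a role, and a user in VQL-style statements. The statement forms are simplified assumptions and all names are hypothetical; the VQL Guide is the authority on exact syntax.

    -- Assumed VQL forms (verify against the VQL Guide); names are hypothetical.
    CREATE DATABASE sales_ldw 'Virtual database for the sales logical data warehouse';
    CREATE ROLE sales_developer 'Developers of the sales virtual data model';
    CREATE USER jdoe 'ChangeMe123!';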
The Data Virtualization Developer role is divided into three sub-roles. The knowledge, responsibilities, and duties of a Denodo Data Virtualization Developer, by sub-role, include:
The Denodo data engineer’s duties include:
Implements the virtual data model construction:
Importing data sources and creating base views
Creating derived views, applying combinations and transformations to the datasets
Writes documentation and defines testing to eliminate development errors before code promotion to other environments
The Denodo business developer’s duties include:
Creates business views for a specific business area from derived and/or interface views
Implements data services delivery
The Denodo application developer’s duties include:
Creates reporting views from business views for reports and/or datasets frequently consumed by users
Denodo Platform Java Programmer
The Denodo Platform Java Programmer role is an optional, specialized role, which:
Creates custom Denodo components, such as data sources, stored procedures, and VDP/ITPilot functions
Implements custom filters in data routines
Tests and debugs any custom components using Denodo4e
Internal Support Team
The Denodo data virtualization internal support team’s duties include:
Access to and knowledge of the use and troubleshooting of developed solutions
Tools and procedures to manage and support project users and developers
Data-driven decision making is at the center of all things. The emergence of data science and machine learning has further reinforced the importance of data as the most critical commodity in today’s world. From FAAMG (the biggest five tech companies: Facebook, Amazon, Apple, Microsoft, and Google) to governments and non-profits, everyone is busy leveraging the power of data to achieve final goals. Unfortunately, this growing demand for data has exposed the inefficiency of the current systems to support the ever-growing data needs. This inefficiency is what led to the evolution of what we today know as Logical Data Lakes.
What Is a Logical Data Lake?
In simple words, a data lake is a data repository that is capable of storing any data in its original format. As opposed to traditional data sources that use the ETL (Extract, Transform, and Load) strategy, data lakes work on the ELT (Extract, Load, and Transform) strategy. This means data does not have to be first transformed and then loaded, which essentially translates into reduced time and effort. Logical data lakes have captured the attention of millions as they do away with the need to integrate data from different data repositories. Thus, with this open access to data, companies can now begin to draw correlations between separate data entities and use this exercise to their advantage.
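To make the ETL versus ELT contrast concrete, here is a short, generic SQL sketch of the ELT pattern; all table, view, and column names (src_clickstream, lake_raw_clickstream, and so on) are hypothetical.

    -- ELT: load raw rows as-is, then transform at read time.
    -- 1. Load: copy source rows into the lake without reshaping them.
    INSERT INTO lake_raw_clickstream
    SELECT * FROM src_clickstream;

    -- 2. Transform: a view applies the reshaping that ETL would have
    --    performed before loading.
    CREATE VIEW clickstream_daily AS
    SELECT user_id,
           CAST(event_ts AS DATE) AS event_date,
           COUNT(*)               AS events_per_day
    FROM lake_raw_clickstream
    GROUP BY user_id, CAST(event_ts AS DATE);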
Primary Use Case Scenarios of Data Lakes
Logical data lakes are a relatively new concept, and thus, readers can benefit from some knowledge of how logical data lakes can be used in real-life scenarios.
Experimental Analysis of Data:
Logical data lakes can play an essential role in the experimental analysis of data to establish its value. Since data lakes work on the ELT strategy, they grant deftness and speed to processes during such experiments.
To store and analyze IoT Data:
Logical data lakes can efficiently store Internet of Things (IoT) data. Data lakes are capable of storing both relational as well as non-relational data. Under logical data lakes, it is not mandatory to define the structure or schema of the data stored. Moreover, logical data lakes can run analytics on IoT data and come up with ways to enhance quality and reduce operational cost.
To improve Customer Interactions:
Logical data lakes can methodically combine CRM data with social media analytics to give businesses an understanding of customer behavior as well as customer churn and its various causes.
To create a Data Warehouse:
Logical data lakes contain raw data. Data warehouses, on the other hand, store structured and filtered data. Creating a data lake is the first step in the process of data warehouse creation. A data lake may also be used to augment a data warehouse.
To support the reporting and analytical function:
Data lakes can also be used to support the reporting and analytical function in organizations. By storing maximum data in a single repository, logical data lakes make it easier to analyze all data to come up with relevant and valuable findings.
A logical data lake is a comparatively new area of study. However, it can be said with certainty that logical data lakes will revolutionize traditional data theories.
The 360-degree view of the consumer is a well-explored concept, but it is not adequate in the digital age. Every firm, whether it is Google or Amazon, is deploying tools to understand customers in a bid to serve them better. A 360-degree view demanded that a company consult its internal data to segment customers and create marketing strategies. It has become imperative for companies to look outside their channels, to platforms like social media and reviews, to gain insight into the motivations of their customers. The 720-degree view of the customer is further discussed below.
What is the 720-degree view of the customer?
A 720-degree view of the customer refers to a three-dimensional understanding of customers, based on deep analytics. It includes information on every customer’s level of influence, buying behavior, needs, and patterns. A 720-degree view will enable retailers to offer relevant products and experiences and to predict future behavior. If done right, this concept should assist retailers in leveraging emerging technologies, mobile commerce, social media, cloud-based services, and analytics to sustain lifelong customer relationships.
What Does a 720-Degree View of the Customer Entail?
Every business desires to cut costs, gain an edge over its competitors, and grow its customer base. So how exactly will a 720-degree view of the customer help a firm advance its cause?
Social media channels help retailers interact more effectively and deeply with their customers. They offer reliable insights into what customers would appreciate in products, services, and marketing campaigns. Retailers can not only evaluate feedback, but they can also deliver real-time customer service. A business that integrates its services with social media will be able to assess customer behavior through tools like likes and dislikes. Some platforms also enable customers to buy products directly.
Customer analytics will construct more detailed customer profiles by integrating different data sources like demographics, transactional data, and location. When this internal data is added to information from external channels like social media, the result is a comprehensive view of the customer’s needs and wants. A firm will subsequently implement more-informed decisions on inventory, supply chain management, pricing, marketing, and customer segmentation. Analytics further come in handy when monitoring transactions, personalized services, waiting times, and website performance.
The modern customer demands convenience and device compatibility. Mobile commerce also accounts for a significant amount of retail sales, and retailers can explore multi-channel shopping experiences. By leveraging a 720-degree view of every customer, firms can provide consumers with the personalized experiences and flexibility they want. Marketing campaigns will also be very targeted as they will be based on the transactional behaviors of customers. Mobile commerce can take the form of mobile applications for secure payment systems, targeted messaging, and push notifications to inform consumers of special offers. The goal should be to provide differentiated shopper analytics.
Cloud-based solutions provide real-time data across multiple channels, which illustrates an enhanced view of the customer. Real-time analytics influence decision-making in retail, and they also harmonize the physical and digital retail environments. The management will be empowered to detect sales trends as transactions take place.
The Importance of the 720-Degree Customer View
Traditional marketers were all about marketing to groups of similar individuals, which is often termed segmentation. This technique is, however, giving way to the more effective concept of personalized marketing. Marketing is currently channeled through a host of platforms, including social media, affiliate marketing, pay-per-click, and mobile. The modern marketer has to integrate the information from all these sources and match it to a real name and address. Companies can no longer depend on a fragmented view of the customer, as there has to be an emphasis on personalization. A 720-degree customer view can offer benefits like:
Firms can improve customer acquisition by depending on the segment differences revealed from a new database of customer intelligence. Consumer analytics will expose any opportunities to be taken advantage of, while external data sources will reveal competitor tactics. There are always segment opportunities in any market, which are best revealed by real-time consumer data.
Marketers who rely on enhanced digital data can contribute to cost management in a firm. It takes less investment to serve loyal and satisfied consumers because a firm is directly addressing their needs. Technology can be used to set customized pricing goals and to segment customers.
New Products and Pricing
Real-time data, in addition to third-party information, has a crucial impact on pricing. Only firms with robust and relevant competitor and customer analytics and data can take advantage of this. Marketers with a 720-degree view of the consumer across many channels will be able to utilize opportunities for new products and personalized pricing to support business growth.
The first 360 degrees include an enterprise-wide and timely view of all consumer interactions with the firm. The other 360 degrees consist of the customer’s relevant online interactions, which supplement the internal data a company holds. The modern customer is making their buying decisions online, and it is where purchasing decisions are influenced. Can you predict a surge in demand before your competitors? A 720-degree view will help you anticipate trends while monitoring the current ones.
The 720-Degree View and Big Data
Firms are always trying to make decision making as accurate as possible, and this is being made more accessible by Big Data and analytics. To deliver customer-centric experiences, businesses require a 720-degree view of every customer collected with the help of in-depth analysis.
Big Data analytical capabilities enable monitoring of after-sales service-associated processes and the effective management of technology for customer satisfaction. A firm invested in being ahead of the curve should maintain relevant databases of external and internal data with global smart meters. Designing specific products for various segments is made easier with the use of Big Data analytics. The analytics will also improve asset utilization and fault prediction. Big Data helps a company maintain a clearly-defined roadmap for growth.
It is the dream of every enterprise to tap into customer behavior and create a rich profile for each customer. The importance of personalized customer experiences cannot be understated in the digital era. The objective remains to develop products that can be advertised and delivered to customers who want them, via their preferred platforms, and at a lower cost.