A collection of information technology and consulting knowledge
Information science is a field primarily concerned with the analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information. Practitioners within and outside the field study application and usage of knowledge in organizations along with the interaction between people, organizations, and any existing information systems with the aim of creating, replacing, improving, or understanding information systems. Historically, information science is associated with computer science, psychology, and technology. However, information science also incorporates aspects of diverse fields such as archival science, cognitive science, commerce, law, linguistics, museology, management, mathematics, philosophy, public policy, and social sciences.
Denodo 7.0 saves some manual coding when building ‘Base Views’ by performing initial data type conversions from ANSI SQL types to Denodo Virtual DataPort data types. So, here is a quick reference showing what the Denodo Virtual DataPort data type mappings are:
ANSI SQL Types to Virtual DataPort Data Types Mapping

ANSI SQL Type                 Virtual DataPort Type
BIT VARYING (n)               blob
CHARACTER VARYING (n)         text
DECIMAL (n, m)                double
NUMERIC (n, m)                double
TIMESTAMP WITH TIME ZONE      timestamptz
VARCHAR ( MAX )               text
ANSI SQL Type Conversion Notes
The CAST function truncates the output when converting a value to text, when these two conditions are met:
You specify a SQL type with a length for the target data type, e.g., VARCHAR(20).
And, this length is lower than the length of the input value.
When casting a boolean to an integer, true is mapped to 1 and false to 0.
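For example (assuming standard CAST semantics in Virtual DataPort), CAST('virtualization' AS VARCHAR(7)) returns the truncated text 'virtual', because the target length of 7 is lower than the input length of 14, while CAST(true AS INTEGER) returns 1.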
Every day, businesses are creating around 2.5 quintillion bytes of data, making it increasingly difficult to make sense of this data and extract valuable information from it. And while this data can reveal a lot about customer bases, users, and market patterns and trends, if not tamed and analyzed, it is just useless. Therefore, for organizations to realize the full value of this big data, it has to be processed. This way, businesses can pull powerful insights from this stockpile of data.
And thanks to artificial
intelligence and machine learning, we can now do away with mundane spreadsheets
as a tool to process data. Through the various AI and ML-enabled data analytics
models, we can now transform the vast volumes of data into actionable insights
that businesses can use to scale operational goals, increase savings, drive
efficiency and comply with industry-specific requirements.
We can broadly classify data analytics into three distinct models: descriptive, predictive, and prescriptive.
Let’s examine each of these analytics models and their applications.
Descriptive Analytics. A Look Into What Happened?
How can an organization
or an industry understand what happened in the past to make decisions for the
future? Well, through descriptive analytics.
Descriptive analytics is
the gateway to the past. It helps us gain insights into what has happened.
Descriptive analytics allows organizations to look at historical data and gain
actionable insights that can be used to make decisions for “the now” and the
future, upon further analysis.
For many businesses,
descriptive analytics is at the core of their everyday processes. It is the
basis for setting goals. For instance, descriptive analytics can be used to set
goals for better customer experience. By looking at the number of tickets
raised in the past and their resolutions, businesses can use ticketing trends
to plan for the future.
Some everyday applications of descriptive analytics include:
Reporting of new trends
and disruptive market changes
Tabulation of social
metrics such as the number of tweets, followers gained over some time, or
Facebook likes garnered on a post.
Summarizing past events such as customer retention, regional sales, or marketing campaign success.
To enhance their decision-making capabilities, businesses have to analyze the data further to make better predictions about the future. That’s where predictive analytics comes in.
Predictive Analytics Takes Descriptive Data One Step Further
Using both new and historical data sets, predictive analytics helps businesses model and forecast what might happen in the future. Using various data mining and statistical algorithms, we can leverage the power of AI and machine learning to analyze currently available data and model it to make predictions about future behaviors, trends, risks, and opportunities. The goal is to go beyond the surface of “what has happened and why it has happened” and identify what will happen. Predictive analytics allows organizations to be prepared and become more proactive, and therefore make decisions based on data and not assumptions. It is a robust model that is being used by businesses to increase their competitiveness and protect their bottom line.
The predictive analytics process is a step-by-step process that requires analysts to:
Define the project deliverables and business objectives
Collect historical and new transactional data
Analyze the data to identify useful information. This analysis can be through inspection, data cleaning, data transformation, and data modeling.
Use various statistical models to test and validate the assumptions.
Create predictive models about the future.
Deploy the data to guide your day-to-day actions and decision-making processes.
Manage and monitor the model performance to ensure that you’re getting the expected results.
How Predictive Analytics Can Be Used
Optimize marketing campaigns and reach customer service objectives.
Improve operations by forecasting inventory and managing resources optimally.
Fraud detection, such as false insurance claims or inaccurate credit applications.
Risk management.
Determine the best direct marketing strategies and identify the most appropriate channels.
Help in underwriting by predicting the chances of bankruptcy, default, or illness.
Health care: use predictive analytics to determine health-related risks and make informed clinical support decisions.
Prescriptive Analytics: Developing Actionable Insights from Descriptive Data
Prescriptive analytics helps us find the best course of action for a given situation. By studying interactions between the past, the present, and possible future scenarios, prescriptive analytics can provide businesses with the decision-making power to take advantage of future opportunities while minimizing risks. With the help of Artificial Intelligence (AI) and Machine Learning (ML), we can use prescriptive analytics to automatically process new data sets as they become available and provide the most viable decision options in a manner beyond any human capabilities.
When effectively used, prescriptive analytics can help businesses avoid the immediate uncertainties resulting from changing conditions by providing them with fact-based best- and worst-case scenarios. It can help organizations limit their risks, prevent fraud, fast-track business goals, increase operational efficiencies, and create more value.
Bringing It All Together
As you can see, different big data analytics models can help you add more sense to raw, complex data by leveraging AI and machine learning. When effectively done, descriptive, predictive, and prescriptive analytics can help businesses realize better efficiencies, allocate resources more wisely, and deliver superior customer success cost-effectively. But ideally, if you wish to gain meaningful insights from predictive or even prescriptive analytics, you must start with descriptive analytics and build up from there.
Occasionally, I need to update the Windows hosts file, but I seem to have a permanent memory block about where the file is located. I have written the location into numerous documents; however, every time I need to verify and/or update the hosts file, I need to look up the path. Today, when I went to look it up, I discovered that I had not actually posted it to this blog site. So, for future reference, I am adding it now.
Here is the path of the Windows Hosts file, the drive letter may change depending on the drive letter on which the Windows install was performed.
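C:\Windows\System32\drivers\etc\hosts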
Chances are you’ve participated in changing or upgrading a software application. While the idea is to push new features to customers or end-users, there have been significant changes over the years in how dev teams build and deliver applications. This shift has been necessitated by the growing need for agility in businesses.
Today, enterprises are pushing their teams to deliver new product features more often, more rapidly, and with minimal interruptions to the end-user experience. Ultimately, this has led to shorter deployment cycles, which translate to:
Quicker customer feedback to the production team, leading to faster fixes on bugs and faster iterations on features
More value to customers within shorter times
Closer collaboration between the development, test, and release teams
But what has changed over the years, really? In this article, we’ll talk about the shift from traditional deployment approaches to the newer and more agile deployment methods, and the pros and cons of each strategy. Let’s dive in, shall we?
Traditional Deployment Strategies. The “one-and-done” Approach.
This strategy required dev teams to update large parts of an application, and sometimes the entire application, in one swoop. The implementation happened in one instance, and all users moved to the newer system immediately on rollout.
This deployment model required businesses to conduct extensive and sometimes difficult development and testing of monolithic systems before releasing the final application to end-users. For on-site installations, the end-users relied on plug-and-install to get the latest versions of an application. Since the new application updates were delivered as a whole new package, the user’s hardware and infrastructure had to be compatible with the software or system for it to run smoothly. Also, the end-user often needed hours of training on critical updates and how to make use of them.
Pros of Traditional Deployment Strategies
Low operational costs: traditional deployment models had lower operating expenses since the entire organization was switched over on a single day. Also, since most of the applications were vendor-packaged solutions like desktop apps or non-production systems, there were minimal maintenance expenses needed after installation.
Minimal upfront requirements: teams would just start coding without tons of requirements and specification documents.
They worked well for small projects with small teams.
Faster return on investment, since the changes occurred site-wide for all users, yielding better returns across departments.
Cons of Traditional Deployments
Traditional deployment strategies presented myriad challenges, including:
It was extremely risky to roll back to the older version in case of severe bugs and errors.
Since this model had no formal planning, no formal leadership, or even standard coding practices, it was prone to costly mistakes down the line that would cost the enterprise money, reputation, and even loss of customers.
It needed a lot of time and manpower to test.
It was too basic for modular, complex projects.
It needed separate teams of developers, testers, and operations staff. Such huge teams were slow and expensive.
High user disruptions and major downtimes: due to the roll-over, organizations would experience a “catch-up” period of low productivity as users tried to adapt to the new system.
As seen above, traditional deployment methodologies were rigorous and often involved repetitive tasks that consumed staggering amounts of coding hours and resources that could otherwise have been spent on core application features. This kind of deployment approach just would not cut it in today’s fast-paced economies, where enterprises are looking for lean, highly effective teams that can quickly deliver high-quality apps to the market.
The solution was to
come up with deployment strategies that allowed enterprises to release and
update different components frequently and seamlessly. These deployments
sometimes happen very fast to meet the increasing end-user needs. For instance,
a mobile app can have several deployments within a day for optimum UX needs.
This is made possible with the adoption of more agile approaches such as
Blue-Green or Red-Black deployments.
What Is “Blue-Green” Vs. “Red-Black” Deployments?
Blue-green and red-black deployments are fail-safe, immutable deployment processes for cloud-native applications and virtualized or containerized services. Ideally, blue-green and red-black deployments are identical: both are designed to reduce application downtime and minimize risk by running two identical production environments.
Unlike the traditional approach, where engineers fix failed features by deploying an older stable version of the application, the blue-green or red-black deployment approach is super-agile, more scalable, and highly automated, so that bugs and updates are handled seamlessly.
“Blue-Green” Vs. “Red-Black”. What’s the Difference?
Both blue-green and red-black deployments represent similar concepts in the sense that they both apply to automatable cloud or containerized services such as a web service or SaaS system. Ideally, once the dev team has made an update or an upgrade, the release team will create two mirrored production environments with identical sets of hardware, route traffic to one of the environments, and test the other, idle environment. So, what is the difference between the two?
The only difference lies in how traffic is routed to the live and idle environments. In a red/black deployment, the new release is deployed to the black environment while traffic is maintained to the red environment. All smoke tests for functionality and performance can be run on the black environment without affecting how end-users are using the system. When the new updates have been confirmed to be working properly and the black version is fully operational, traffic is then moved to the new environment by simply changing the router configuration from red to black. This ensures near-zero downtime with the new release.
This is similar to blue-green deployments, except that with blue/green deployments it is possible for both environments to receive requests at the same time temporarily through load balancing, unlike the red/black deployment, where only one version can get traffic at any given time. This means that in blue-green deployments, enterprises can release the new version of the application to a select group of users to test and give feedback before the system goes live for all users.
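As a minimal sketch of the router-level switch, assuming an Nginx reverse proxy in front of two identical environments with hypothetical config files blue.conf and green.conf, the cut-over can be a one-line symlink swap:

```bash
#!/usr/bin/env bash
# Hypothetical blue/green cut-over: "live.conf" is a symlink pointing at
# whichever environment's server definition should receive production
# traffic. Swapping the link and reloading routes all new requests to
# the chosen environment with near-zero downtime.
NEW_ENV="$1"    # expected: "blue" or "green"

ln -sfn "/etc/nginx/sites-available/${NEW_ENV}.conf" \
        /etc/nginx/sites-enabled/live.conf

nginx -t && systemctl reload nginx    # validate the config, then apply

# Rolling back is the same command with the other environment name.
```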
Pros of “Blue-Green” or “Red-Black” Deployments
You can roll back the traffic to the still-operating environment seamlessly and with near-zero user disruption.
Allows teams to test their disaster recovery procedures in a live production environment.
Less risky, since test teams can run full regression testing before releasing the new versions.
Since the two versions are already loaded on the mirrored environments and traffic to the live site is unaffected, test and release teams are under no time pressure to push the release of the new code.
Cons of Blue-Green and Red-Black Deployments
You need to set up and maintain duplicate infrastructure to carry out blue-green deployments.
It can lead to significant downtime if you’re running hybrid microservice apps and some traditional apps together.
Database migrations are sometimes tricky and need to happen alongside the app deployment.
The new CentOS 8 rebuild is out. Christened version 8.0-1905, this release provides a secure, stable, and more reliable foundation for CentOS users, such as organizations running high-performance websites and businesses with Linux experts that use CentOS daily for their workloads but who do not need strong commercial support.
The new OS comes in
after Red Hat released RHEL 8 – Red Hat Enterprise Linux – in May of this year.
According to CentOS 8 release notes, the contributors note that this rebuild is
100% compliant with Red Hat’s redistribution policy. This Linux distro allows
users to achieve successful operations using the robust power of an
enterprise-class OS, but without the cost of support and certification. Below
are some of the updates as outlined in CentOS 8 release notes that you can
expect with this new release and some of the deprecated features.
What’s New in the Just Released CentOS 8?
BaseOS and AppStream
New container tools
Systemwide crypto policies
TCP stack improvements
DNF (Dandified Yum)
· BaseOS and AppStream
The main repository, or Base Operating System (BaseOS), offers the components of the distribution that provide the running user space on hardware, virtual machines, or containers. The Application Stream (AppStream) repository offers all the apps you might want to run in a given user space. The Supplemental repository offers software that comes with special licensing.
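For example, on a stock install you can confirm the repository layout yourself; BaseOS and AppStream should both appear enabled:

```bash
# List the repositories enabled by default on CentOS 8.
dnf repolist
```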
· New Container Tools
With the aid of Podman, CentOS 8 supports Linux containers. Podman replaces Docker and Moby, which depend on a daemon and run as root. Unlike the previous release, Podman in the new version does not depend on a daemon, and it allows users to create images from scratch using Buildah.
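As a quick sketch of the daemonless workflow (standard Podman and Buildah usage, not specific to these release notes):

```bash
# Run a container rootless; no background daemon is involved.
podman pull centos:8
podman run --rm centos:8 cat /etc/centos-release

# Buildah assembles images step by step from an existing base
# (or even "from scratch"), then commits them to local storage.
container=$(buildah from centos:8)
buildah run "$container" -- dnf -y install httpd
buildah commit "$container" localhost/centos-httpd:latest
```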
· Systemwide Crypto Policies
The command update-crypto-policies can be used to update the system-wide cryptographic policy on the new OS. The policies have settings for the following applications and libraries: the NSS TLS library, the Kerberos 5 library, the OpenSSH SSH2 protocol implementation, the Libreswan IPsec and IKE protocol implementation, the OpenSSL TLS library, and GnuTLS.
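A minimal sketch of the tool in action; DEFAULT, LEGACY, FUTURE, and FIPS are the standard policy levels it ships with:

```bash
# Show the active system-wide policy.
update-crypto-policies --show

# Tighten the whole system to the FUTURE policy; a reboot is
# recommended so every service picks up the new settings.
sudo update-crypto-policies --set FUTURE
```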
· TCP Stack Improvements
The CentOS 8 Linux distro also brings with it TCP stack version 4.16 with an improved ingress connection rate. The Linux kernel is now able to support the new BBR and NV congestion control algorithms, which is very helpful in improving Linux server performance.
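As a brief illustration, you can check and switch the congestion control algorithm with sysctl (assuming the tcp_bbr kernel module is available):

```bash
# List the congestion control algorithms the running kernel offers.
sysctl net.ipv4.tcp_available_congestion_control

# Switch new TCP connections to BBR; pairing it with the fq qdisc
# is the commonly recommended configuration.
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
```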
· DNF – Dandified Yum
The new operating system includes the basic foundations of the Yum package manager but is now upgraded to DNF (Dandified Yum). Though it maintains a similar command-line interface and API to its predecessor, it promises to be faster and more seamless.
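Because DNF keeps yum’s familiar verbs, day-to-day usage looks much the same; for example:

```bash
# Everyday package management with DNF on CentOS 8.
sudo dnf check-update          # list pending updates
sudo dnf install httpd         # install a package
sudo dnf module list           # browse AppStream module streams
```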
CentOS 8 also has a compiler based on GCC 8.2 and includes support for more recent C++ language standard versions, improved optimizations, new code-hardening techniques, new hardware support, and better warnings.
In addition to those features, the new CentOS 8 also supports secure guests, which use cryptographically signed images to ensure that the program retains its integrity. It also boasts improved memory management and support. CentOS 8 release notes state that the new OS will allow the crash dump to capture kernel crashes during all boot phases, which was not possible before.
CentOS 8 moves encrypted storage to LUKS2. It also brings enhancements to the process scheduler, including the new deadline process scheduler. This Linux distro will also enable installation on and booting from dual in-line, non-volatile memory modules (NVDIMMs).
A great bonus feature is
that you can manage the new software with Cockpit via a web browser. This
feature is very user-friendly, making it great for system administrators and
new users alike.
If you are upgrading from previous CentOS versions, the most significant change is the nftables framework, which has replaced iptables. nftables allows users to perform network address translation (NAT), packet mangling, packet classification, and packet filtering. Unlike iptables, nftables provides secure firewall support with enhanced performance, increased scalability, and easier code maintenance.
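As a minimal, hedged sketch (the table and chain names are arbitrary), a basic nftables ruleset that keeps established connections and SSH while dropping everything else looks like this:

```bash
# Create a table and a hooked input chain with a default drop policy.
sudo nft add table inet filter
sudo nft add chain inet filter input \
    '{ type filter hook input priority 0; policy drop; }'

# Keep already-established traffic and allow inbound SSH.
sudo nft add rule inet filter input ct state established,related accept
sudo nft add rule inet filter input tcp dport 22 accept

sudo nft list ruleset    # inspect the resulting rules
```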
These changes, though not major, may cause problems with existing firewall functionality. Although in-place upgrades may be supported on RHEL, it is not advisable to upgrade directly from much older versions of CentOS, like CentOS 6 and below, as they may not be compatible.
Users of CentOS as a desktop will see an update of the GNOME Shell default interface to version 3.28, with Wayland remaining the default display server.
If you are looking to upgrade from previous versions, a tool to do so directly is yet to be released. As such, your most favorable option would be to back up your data as you install the newly released CentOS 8. When it is up and running, you can then move all the data to the new system.
Nonetheless, the new CentOS 8 Linux release is an exciting one. This OS provides a manageable and consistent platform that suits a wide variety of deployments, and it comes with well-thought-out and ingenious software updates that will help avid users build more robust container workloads and web apps.
A Denodo virtualization project typically classifies the project duties of the primary implementation team into four primary roles.
Denodo Data Virtualization Project Roles
Data Virtualization Architect
Denodo Platform Administrator
Data Virtualization Developer
Denodo Platform Java Programmer
Data Virtualization Internal Support Team
Project Team Member Alignment
While a Denodo project’s roles are grouped by security permissions and sets of duties, it is important to note that the assignment of roles among project team members can be very dynamic. Which team member performs a given role can change over the lifecycle of a Denodo project, and one team member may hold more than one role at any given time, or acquire or lose roles based on the needs of the project.
Data Virtualization Project Role Duties
The knowledge, responsibilities, and duties of a Denodo data virtualization architect include:
A deep understanding of Denodo security features and data governance
Define and document best practices for users, roles, and security permissions.
Have a strong understanding of enterprise architecture.
Defines the data virtualization architecture.
Guides the definition and documentation of the virtual data model, including delivery modes, data sources, and data combinations.
The knowledge, responsibilities, and duties of a Denodo Platform Administrator include:
Denodo platform installation and maintenance, such as:
Installs Denodo platform servers
Defines Denodo platform update and upgrade policies
Creates, edits, and removes environments, clusters, and servers
Manages Denodo licenses
Defines Denodo platform backup policies
Defines procedures for artifact promotion between environments
Denodo platform configuration and management, such as:
Configures Denodo platform server ports
Platform memory configuration and Java Virtual Machine (JVM) options
Sets the maximum number of concurrent requests
Sets up database configuration
Configures the cache server
Authentication configuration for users connecting to the Denodo platform (e.g., LDAP)
Secures (SSL) communications connections of Denodo components
Provides connectivity credential details for client tools/applications (JDBC, ODBC, etc.)
Configuration of resources
Sets up Version Control System (VCS) configuration for Denodo
Creates new virtual databases
Creates users and roles, and assigns privileges/roles
Executes diagnostics and monitoring operations, analyzes logs, and identifies potential issues
Manages load balancing variables
The Data Virtualization Developer role is divided into three sub-roles. The knowledge, responsibilities, and duties of a Denodo Data Virtualization Developer, by sub-role, include:
The Denodo data engineer’s duties include:
Implements the virtual data model construction
Imports data sources and creates base views
Creates derived views, applying combinations and transformations to the datasets
Writes documentation and defines testing to eliminate development errors before code promotion to other environments
The denodo business developer’s duties include:
Creates business views for a specific business area from derived and/or interface views
Implements data services delivery
The denodo application developer’s duties include:
Creates reporting views from business views for reports and/or datasets frequently consumed by users
Denodo Platform Java Programmer
The Denodo Platform Java Programmer role is an optional, specialized role, which:
Creates custom denodo components, such as data sources, stored procedures, and VDP/iTPilot functions.
Implements custom filters in data routines
Tests and debugs any custom components using Denodo4e
Data Virtualization Internal Support Team
The Denodo data virtualization internal support team’s duties include:
Access to and knowledge of the use and troubleshooting of developed solutions
Tools and procedures to manage and support project users and developers