Why Unit Testing is Important

Whenever a new application is in development, unit testing is a vital part of the process and is typically performed by the developer. During this process, sections of code are isolated at a time and are systematically checked to ensure correctness, efficiency, and quality. There are numerous benefits to unit testing, several of which are outlined below.

1. Maximizing Agile Programming and Refactoring

During the coding process, a programmer has to keep in mind a myriad of factors to ensure that the final product is correct and as lightweight as possible. However, the programmer also needs to make certain that if changes become necessary, refactoring can be done safely and easily.

Unit testing is the simplest way to support agile programming and refactoring, because the isolated sections of code have already been tested for accuracy, which minimizes refactoring risks.

2. Find and Eliminate Any Bugs Early in the Process

Ultimately, the goal is to find no bugs and no issues to correct, right? But unit testing is there to ensure that any existing bugs are found early on so that they can be addressed and corrected before additional coding is layered on. While it might not feel like a positive thing to have a unit test reveal a problem, it’s good that it’s catching the issue now so that the bug doesn’t affect the final product.
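As a minimal sketch of this idea, here is a unit test written with Python's built-in unittest module. The `apply_discount` function and its clamping rule are hypothetical, invented for illustration; the point is that the second test would expose a negative-price bug long before additional code is layered on top:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    # Clamp the discount so a bad input can never produce a negative price.
    percent = max(0, min(100, percent))
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_out_of_range_discount_is_clamped(self):
        # Without the clamp above, a 150% discount would yield a negative
        # price; a unit test surfaces that bug well before release.
        self.assertEqual(apply_discount(100.0, 150), 0.0)
```

Run with `python -m unittest <file>`; if the clamp is ever removed during a refactor, the second test fails immediately, which is exactly the early-warning benefit described above.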

3. Document Any and All Changes

Unit testing provides documentation for each section of code that has been isolated, allowing those who haven’t directly worked with the code to locate and understand each individual section as necessary. This is invaluable in helping developers understand unit APIs without too much hassle.

4. Reduce Development Costs

As one can imagine, fixing problems after the product is complete is both time-consuming and costly. Not only do you have to sort back through an entire application’s worth of code, but any bugs may have been compounded and repeated throughout the application. Unit testing not only limits the amount of work that needs to be done after the application is completed, it also reduces the time it takes to fix errors, because it prevents developers from having to fix the same problem more than once.

5. Assists in Planning

Thanks to the documentation aspect of unit testing, developers are forced to think through the design of each individual section of code so that its function is determined before it’s written. This can prevent redundancies, incomplete sections, and nonsensical functions because it encourages better planning. Developers who implement unit testing in their applications will ultimately improve their creative and coding abilities thanks to this aspect of the process.

Conclusion

Unit testing is absolutely vital to the development process. It streamlines the debugging process and makes it more efficient, saves on time and costs for the developers, and even helps developers and programmers improve their craft through strategic planning. Without unit testing, people would inevitably wind up spending far more time on correcting problems within the code, which is both inefficient and incredibly frustrating. Using unit tests is a must in the development of any application.

Personas Vs. Roles – What Is The Difference?

Personas and roles are user modeling approaches that are applied in the early stages of system development or redesign. They drive design decisions and allow programmers and designers to place everyday user needs at the forefront of the system development journey in a user-centered design approach.

Personas and user roles help improve the quality of user experience when working with products that require a significant amount of user interaction. But there is a distinct difference between technology personas and roles. What exactly is a persona? What are user roles in system development? And how do personas differ from user roles?

Let’s see how these two distinct, yet often confused, user models fit in a holistic user-centered design process and how you can leverage them to identify valuable product features.

Technology Personas Vs. Roles – The Most Relevant Way to Describe Users

In software development, a user role describes the relationship between a user type and a software tool. It is generally the user’s responsibility when using a system or the specific behavior of a user who is participating in a business process. Think of roles as the umbrella, homogeneous constructs of the users of a particular system. For instance, in an accounting system, you can have roles such as accountant, cashier, and so forth.
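In code, such roles often surface as coarse-grained constructs tied to permissions. The following is an illustrative sketch only; the role names and permission strings are hypothetical, extending the accounting example above:

```python
from enum import Enum

class Role(Enum):
    """Umbrella user-role constructs for a hypothetical accounting system."""
    ACCOUNTANT = "accountant"
    CASHIER = "cashier"
    AUDITOR = "auditor"

# Hypothetical coarse-grained permissions attached to each role.
PERMISSIONS = {
    Role.ACCOUNTANT: {"post_journal", "close_period"},
    Role.CASHIER: {"record_payment"},
    Role.AUDITOR: {"view_reports"},
}

def can(role: Role, action: str) -> bool:
    """Check whether a role is allowed to perform an action."""
    return action in PERMISSIONS.get(role, set())
```

Note how little this says about the people behind the roles, which is precisely the gap the next paragraphs describe.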

However, by merely using roles, system developers, designers, and testers do not have sufficient information to conclusively make critical UX decisions that would make the software more user-centric, and more appealing to its target users.

This lack of understanding of the user community has led to the need for teams to move beyond role-based requirements and focus more on subsets of the system users. User roles can be refined further by creating “user stand-ins,” known as personas. By using personas, developers and designers can move closer to the needs and preferences of the user in a more profound manner than they would by merely relying on user roles.

In product development, a user persona is an archetype of a fictitious user that represents a specific group of your typical everyday users. First introduced by Alan Cooper, personas help the development team clearly understand the context in which the ideal customer interacts with a software system, and they help guide the design decision process.

Ideally, personas provide team members with a name, a face, and a description for each user role. By using personas, you’re typically personalizing the user roles, and by so doing, you end up creating a lasting impression on the entire team. Through personas, team members can ask questions about the users.

The Benefits of Persona Development

Persona development has several benefits, including:

  • They help team members have a consistent understanding of the user group.
  • They provide stakeholders with an opportunity to discuss the critical features of a system redesign.
  • Personas help designers to develop user-centric products that have functions and features that the market already demands.
  • A persona helps to create more empathy and a better understanding of the person that will be using the end product. This way, the developers can design the product with the actual user needs in mind.
  • Personas can help predict the needs, behaviors, and possible reactions of the users to the product.

What Makes Up a Well-Defined Persona?

Once you’ve identified user roles that are relevant to your product, you’ll need to create personas for each. A well-defined persona should ideally take into consideration the needs, goals, and observed behaviors of your target audience. This will influence the features and design elements you choose for your system.

The user persona should encompass all the critical details about your ideal user and should be presented in a memorable way that everyone in the team can identify with and understand. It should contain four critical pieces of information.

1. The header

The header aids in improving memorability and creating a connection between the design team and the user. The header should include:

  • A fictional name
  • An image, avatar or a stock photo
  • A vivid description/quote that best describes the persona as it relates to the product.

2. Demographic Profile

Unlike the name and image, which might be fictitious, the demographic profile includes factual details about the ideal user. The demographic profile includes:

  • Personal background: Age, gender, education, ethnicity, persona group, and family status
  • Professional background: Occupation, work experience, and income level.
  • User environment: the social, physical, and technological context of the user. It answers questions like: What devices does the user have? Do they interact with other people? How do they spend their time?
  • Psychographics: Attitudes, motivations, interests, and user pain points.

3. End Goal(s)

End goals help answer the questions: What problems or needs will the product solve for the user? What are the motivating factors that inspire the user’s actions?

4. Scenario

This is a narrative that describes how the ideal user would interact with your product in real-life to achieve their end goals. It should explain the when, the where, and the how.
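The four pieces of information above can be captured in a lightweight data structure. This is an illustrative sketch, not a standard schema; all field names and the example values are my own:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    # 1. Header: name, image, and a short quote for memorability
    name: str
    avatar_url: str
    quote: str
    # 2. Demographic profile: factual details about the ideal user
    personal: dict = field(default_factory=dict)
    professional: dict = field(default_factory=dict)
    environment: dict = field(default_factory=dict)
    psychographics: dict = field(default_factory=dict)
    # 3. End goals that motivate the user's actions
    end_goals: list = field(default_factory=list)
    # 4. Scenario: the when, where, and how of product use
    scenario: str = ""

# A hypothetical persona refining the "accountant" role mentioned earlier.
accountant = Persona(
    name="Amara the Accountant",
    avatar_url="https://example.com/amara.png",
    quote="I need month-end reconciliation done before lunch.",
    personal={"age": 38, "education": "CPA"},
    professional={"occupation": "Senior accountant"},
    end_goals=["Close the books faster", "Fewer manual corrections"],
    scenario="Uses the system every weekday morning to review ledgers.",
)
```

Keeping a persona in a shared, structured form like this makes it easy for every team member to reference the same details when debating a feature.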

Conclusion

For a truly successful user-centered design approach, system development teams should use personas to provide rich descriptions of key user roles. While a distinct difference exists between technology personas and roles, design teams should use the two user-centered design tools together throughout the project to shape and evaluate the functionality of their end product. This way, they can deliver a useful and usable solution to their target market.

denodo SQL Type Mapping

denodo 7.0 saves some manual coding when building ‘Base Views’ by performing initial data type conversions from ANSI SQL types to denodo Virtual DataPort data types. Here is a quick reference showing what the denodo Virtual DataPort data type mappings are:

ANSI SQL Types to Virtual DataPort Data Types Mapping

ANSI SQL Type               Virtual DataPort Type
--------------------------  ---------------------
BIT (n)                     blob
BIT VARYING (n)             blob
BOOL                        boolean
BYTEA                       blob
CHAR (n)                    text
CHARACTER (n)               text
CHARACTER VARYING (n)       text
DATE                        localdate
DECIMAL                     double
DECIMAL (n)                 double
DECIMAL (n, m)              double
DOUBLE PRECISION            double
FLOAT                       float
FLOAT4                      float
FLOAT8                      double
INT2                        int
INT4                        int
INT8                        long
INTEGER                     int
NCHAR (n)                   text
NUMERIC                     double
NUMERIC (n)                 double
NUMERIC (n, m)              double
NVARCHAR (n)                text
REAL                        float
SMALLINT                    int
TEXT                        text
TIMESTAMP                   timestamp
TIMESTAMP WITH TIME ZONE    timestamptz
TIMESTAMPTZ                 timestamptz
TIME                        time
TIMETZ                      time
VARBIT                      blob
VARCHAR                     text
VARCHAR ( MAX )             text
VARCHAR (n)                 text

ANSI SQL Type Conversion Notes

  • The CAST function truncates the output when converting a value to text, when these two conditions are met:
  1. You specify a SQL type with a length for the target data type, e.g. VARCHAR(20).
  2. This length is lower than the length of the input value.
  • When casting a boolean to an integer, true is mapped to 1 and false to 0.
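To make the mapping above easy to apply in practice, here is an illustrative lookup sketch in Python. The dictionary is a partial paraphrase of the table, not denodo’s actual implementation, and the text fallback is an assumption for unlisted types; the boolean cast mirrors the note above:

```python
import re

# Partial mapping paraphrasing the table above; length/precision
# qualifiers such as "(n)" or "(n, m)" are stripped before lookup.
ANSI_TO_VDP = {
    "BIT": "blob", "BOOL": "boolean", "CHAR": "text",
    "VARCHAR": "text", "DATE": "localdate", "DECIMAL": "double",
    "FLOAT": "float", "INTEGER": "int", "SMALLINT": "int",
    "TIMESTAMP": "timestamp", "TIMESTAMPTZ": "timestamptz",
}

def vdp_type(ansi_type: str) -> str:
    """Map an ANSI SQL type name to its Virtual DataPort type."""
    # Drop a trailing parenthesized length/precision, e.g. "VARCHAR (n)".
    base = re.sub(r"\s*\(.*\)$", "", ansi_type.strip().upper())
    return ANSI_TO_VDP.get(base, "text")  # assumed fallback, see lead-in

def cast_bool_to_int(value: bool) -> int:
    """Mimic the documented cast: true -> 1, false -> 0."""
    return 1 if value else 0
```

For example, `vdp_type("DECIMAL (n, m)")` resolves to `double` and `vdp_type("VARCHAR (n)")` to `text`, matching the table.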

What Are “Blue-Green” Or “Red-Black” Deployments?

Chances are you’ve participated in changing or upgrading a software application. While the idea is to push new features to customers or end-users, there have been significant changes over the years in how dev teams build and deliver applications. This shift has been necessitated by the growing need for agility in businesses.

Today, enterprises are pushing their teams to deliver new product features more often, more rapidly, and with minimal interruptions to the end-user experience. Ultimately, this has led to shorter deployment cycles that translate to:

  • Reduced time-to-market
  • More updates
  • Quicker customer feedback to the production team, leading to faster bug fixes and faster iterations on features
  • More value to customers within shorter times
  • More coordination between the development, test, and release teams.

But what has changed over the years, really? In this article, we’ll talk about the shift from traditional deployment approaches to newer, more agile deployment methods, and the pros and cons of each strategy.

Let’s dive in, shall we?

Traditional Deployment Strategies: The “One-and-Done” Approach

The classic deployment strategy required dev teams to update large parts of an application, and sometimes the entire application, in one swoop. The implementation happened in one instance, and all users moved to the newer system immediately on rollout.

This deployment model required businesses to conduct extensive and sometimes difficult development and testing of the monolithic systems before releasing the final application to the market.

Characterized by on-site installations, the end-users relied on plug-and-install to get the latest versions of an application. Since the new application updates were delivered as a whole new package, the user’s hardware and infrastructure had to be compatible with the software or system for it to run smoothly. Also, the end-user needed hours of training on critical updates and how to make use of the deliverable.

Pros of Traditional Deployment Strategies

Low operational costs: Traditional deployment models had lower operating expenses since all departments were switched over on a single day. Also, since most of the applications were vendor-packaged solutions like desktop apps or non-production systems, minimal maintenance expenses were needed after installation.

No planning requirements. This means that teams would just start coding without tons of requirements and specification documents.

They worked well for small projects with small teams.

Faster return on investment, since the changes occurred site-wide for all users, producing returns across all departments at once.

Cons of Traditional Deployments

Traditional deployment strategies presented myriad challenges, including:

  • It was extremely risky to roll back to the older version in case of severe bugs and errors.
  • Potentially expensive: Since this model had no formal planning, no formal leadership, or even standard coding practices, the model was prone to costly mistakes down the line that would cost the enterprise money, reputation, and even loss of customers.
  • It needed a lot of time and manpower to test.
  • It was too basic for modular, complex projects.
  • It needed separate teams of developers, testers, and operations. Such huge teams were slow and lethargic.
  • High user disruption and major downtime: due to the roll-over, organizations would experience a “catch-up” period of low productivity as users tried to adapt to the new system.

As seen above, traditional deployment methodologies were rigorous and sometimes had a lot of repetitive tasks that consumed staggering amounts of coding hours and resources that could otherwise have been used in working on core application features. This kind of deployment approach would not just cut it in today’s fast-paced economies where enterprises are looking for lean, highly-effective teams that can quickly deliver high-quality apps to the market.

The solution was to come up with deployment strategies that allowed enterprises to release and update different components frequently and seamlessly. These deployments sometimes happen very fast to meet the increasing end-user needs. For instance, a mobile app can have several deployments within a day for optimum UX needs. This is made possible with the adoption of more agile approaches such as Blue-Green or Red-Black deployments.

What Are “Blue-Green” and “Red-Black” Deployments?

Blue-green and red-black deployments are fail-safe, immutable deployment processes for cloud-native applications and virtualized or containerized services. Conceptually, the two are nearly identical: both are designed to reduce application downtime and minimize risk by running two identical production environments.

Unlike the traditional approach, where engineers fix failed features by deploying an older stable version of the application, the blue-green or red-black deployment approach is super-agile, more scalable, and highly automated, so that bug fixes and updates happen seamlessly.

“Blue-Green” vs. “Red-Black”: What’s the Difference?

Both blue-green and red-black deployments represent similar concepts in that both apply to automatable cloud or containerized services such as a web service or SaaS system. Ideally, once the dev team has made an update or upgrade, the release team creates two mirror production environments with identical sets of hardware, routes traffic to one of the environments, and tests the other, idle environment.

So, what is the difference between the two?

The only difference lies in the amount of traffic routed to the live and idle environments.

In a red-black deployment, the new release is deployed to the red environment while traffic continues to flow to the black environment. All smoke tests for functionality and performance can be run on the idle red environment without affecting how end-users are using the system. When the new updates have been confirmed to be working properly and the red version is fully operational, traffic is moved to the new environment by simply changing the router configuration from black to red. This ensures near-zero downtime with the latest release.

This is similar to blue-green deployments, except that with blue/green deployments both environments can temporarily receive requests at the same time through load balancing, whereas in a red/black deployment only one version can receive traffic at any given time. This means that in blue-green deployments, enterprises can release the new version of the application to a select group of users to test and give feedback before the system goes live for all users.
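The distinction boils down to how traffic weights are assigned. As an illustrative sketch (plain Python, not any particular router’s or load balancer’s API): red/black is all-or-nothing, while blue/green can split traffic during a canary phase:

```python
def route_red_black(live: str) -> dict:
    """Red/black: exactly one environment receives all traffic."""
    if live == "red":
        return {"red": 1.0, "black": 0.0}
    return {"red": 0.0, "black": 1.0}

def route_blue_green(green_share: float) -> dict:
    """Blue/green: traffic may be split while the new (green) version
    is validated by a select group of users."""
    green_share = max(0.0, min(1.0, green_share))
    return {"blue": 1.0 - green_share, "green": green_share}

# Cut over in red/black: flip the router configuration in one step.
weights = route_red_black("red")      # {'red': 1.0, 'black': 0.0}
# Canary in blue/green: send 10% of users to the new version first.
weights = route_blue_green(0.10)      # {'blue': 0.9, 'green': 0.1}
```

Rolling back is the same operation in reverse: restore the previous weight assignment and traffic returns to the still-operating environment.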

Pros Of “Blue-Green” Or “Red-Black” Deployments

  • You can roll back traffic to the still-operating environment seamlessly, with near-zero user disruption.

  • Reduced downtime
  • Allows teams to test their disaster recovery procedures in a live production environment
  • Less Risky since test teams can run full regression testing before releasing the new versions
  • Since the two codes are already loaded on the mirror environments and traffic to the live site is unaffected, test and release teams have no time pressures to push the release of the new code.

Cons of Blue-Green and Red-Black Deployments

  • It requires double the infrastructure to maintain two production environments.
  • It can lead to significant downtime if you’re running hybrid microservice apps and some traditional apps together.
  • Database dependent: database migrations are sometimes tricky and need to be handled alongside the app deployment
  • It is difficult to run at large scale

There you have it!

Is a Multi-Cloud Strategy A Fit For your Enterprise?

Enterprises and cloud computing have become more integrated, and cloud is now essential to gaining or maintaining a competitive advantage through big data and analytics, as well as to improving operational efficiency and synergy. To optimize the enterprise architecture with the cloud, a few strategic questions need to be considered:

  • First, how much cloud business does your enterprise need?
  • And, what cloud strategy best meets your enterprise operational and security needs?
  • Where do private, public, or hybrid clouds fit in your enterprise’s information workload deployment strategy?
  • Does multi-cloud fit in the enterprise’s information workload deployment strategy?

What is A Multi-cloud Strategy?

This is probably the point where the narrative should introduce the principle of multi-cloud. Multi-cloud is an approach to cloud computing that seeks to optimize enterprise costs and return on investment (ROI) and to enable big data analytics, and it is already evolving the information workload deployment strategy of many organizations. Multi-cloud has already affected the major software and Software-as-a-Service (SaaS) providers, which have been rapidly evolving their application suites to enable this new reality. As recently as this week, IBM announced that it had moved to a cloud-native software architecture.

Is It Time To Consider A Multi-Cloud Strategy For Your Enterprise?

Multi-cloud is a cloud computing strategy that seeks to combine capabilities from different cloud providers to optimize different business operations and technical requirements. A multi-cloud strategy can be a way to reduce dependence on more traditional software vendors and/or on a single cloud service provider.

Advantages Of A Multi-Cloud Strategy

The advantages of a multi-cloud enterprise information workload deployment strategy are:

  • The enterprise can still operate even if one or more of its cloud providers goes offline or encounters other difficulties.
  • Enterprises can avoid vendor lock-in, since the enterprise’s data is stored with different cloud service providers and could be migrated if need be.
  • Multi-cloud can reduce the scale of data breach vulnerability, since breaching one cloud does not provide access to the entire data of your enterprise; even if your organization has not implemented a hybrid-cloud (private/public) strategy, the data simply isn’t all housed in one cloud.
  • Importantly, multi-cloud solutions are customizable. Every enterprise can select what works best in order to achieve optimal efficiency.

Disadvantages Of The Multi-Cloud

The multi-cloud enterprise information workload deployment strategy has downsides as well. For instance:

  • Integration across multi-cloud providers may require more planning, relationship management, and strategic oversight.
  • Multi-cloud implementations, while reducing the potential scale of any one security breach, provide more than one potential breach point to be monitored, managed, and mitigated.

Conclusion

Based on your enterprise’s industry, its use of big data technologies, its information security needs, and its use of information analytics to gain or maintain a competitive and/or comparative advantage, a multi-cloud enterprise information workload deployment strategy has a place in optimizing your enterprise’s technical and information strategy, especially when your multi-cloud strategy includes a hybrid cloud (public/private) as a major pillar.