
Securing REST services with DataPower


In a recent post we introduced REST services as an integration option that is growing in popularity.

Use cases for this type of integration typically include exposing REST services for mobile app consumption and for cloud-based/third-party integrations. The question quickly arises: how can I secure these services? SOAP services have multiple well-defined standards around security, such as WS-Security, SAML and even XML digital signatures. Since REST is based on HTTP, it can support basic encryption (SSL) and authentication (BasicAuth), but many enterprise-level applications require more flexible and comprehensive security solutions.

Enter DataPower, an appliance built specifically for web services deployments, governance, light integrations and hardened security in a single “drop-in” box. By adding DataPower as a reverse proxy in front of the REST services in your DMZ, you immediately get the following security benefits:

  • Fine-grained authentication and authorization
  • Data validation
  • SQL injection protection
  • Throttling

AAA node

The root of DataPower’s security is the AAA node: a flexible security processor that can extract a variety of tokens, authenticate and authorize those tokens against a variety of PDPs (Policy Decision Points) and, if required, convert the tokens to different formats for downstream processing. A typical REST use case is one where an HTTP Basic-Auth header is sent with the request and is validated and authorized against a control server (LDAP, for example); the security token is transformed if required (HTTP Basic-Auth to a SAML token, say); and the request is then forwarded to the actual service. With the flexibility of the AAA node, each component can be interchanged – OAuth tokens, Tivoli control servers, and more.
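To make the flow easier to picture, here is a minimal Python sketch of the extract / authenticate / authorize / map sequence. It is purely illustrative: in DataPower these phases are configured in the AAA policy, not hand-coded, and the LDAP decision point is stubbed here with an in-memory dictionary (all names below are hypothetical).

```python
import base64

# Toy credential/group store standing in for an LDAP PDP (assumption, not a real directory).
_USERS = {"alice": "s3cret"}
_GROUPS = {"/loans": {"alice"}}


def extract_identity(headers):
    """Extract-identity phase: pull username/password out of an HTTP Basic-Auth header."""
    scheme, _, token = headers.get("Authorization", "").partition(" ")
    if scheme.lower() != "basic" or not token:
        raise PermissionError("no Basic-Auth credentials presented")
    user, _, password = base64.b64decode(token).decode().partition(":")
    return user, password


def authenticate(user, password):
    """Authenticate phase: validate credentials against the decision point (stubbed)."""
    return _USERS.get(user) == password


def authorize(user, resource):
    """Authorize phase: check group membership for the requested resource (stubbed)."""
    return user in _GROUPS.get(resource, set())


def map_credentials(user):
    """Post-processing phase: convert the identity to a token the back end expects,
    sketched here as a heavily simplified SAML-style assertion."""
    return f"<saml:Assertion><saml:Subject>{user}</saml:Subject></saml:Assertion>"


def aaa(headers, resource):
    """Run the whole AAA pipeline and return the mapped token to forward downstream."""
    user, password = extract_identity(headers)
    if not (authenticate(user, password) and authorize(user, resource)):
        raise PermissionError("request rejected by AAA policy")
    return map_credentials(user)
```

Swapping one step, for example replacing the Basic-Auth extraction with an OAuth token lookup, leaves the rest of the pipeline untouched, which is exactly the interchangeability the AAA node provides.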

JSON-to-XSD validation, including SQL injection protection

REST services have moved away from XML documents to JSON for its ease of client-side manipulation. The downside is that you cannot define the data structure of JSON with anything akin to an XSD. DataPower, however, has a built-in JSON-to-JSONx parser that takes a valid JSON structure and outputs XML. One can then use a well-defined XSD to validate the data and protect against SQL injection using the built-in filter. A side benefit is that you can also transform the JSONx structure to SOAP, i.e. DataPower can provide a REST/JSON interface to SOAP services.
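The sketch below illustrates the idea in Python: turn the JSON payload into XML, then validate it against a conventional XSD. It is a simplified stand-in, not DataPower’s actual JSONx vocabulary or filter, and the schema file name (request.xsd) is a hypothetical example. It assumes the lxml package is installed.

```python
import json
from lxml import etree  # assumes lxml is available


def json_to_xml(name, value):
    """Very simplified JSON-to-XML conversion; DataPower's JSONx format is richer,
    but the principle (JSON in, XML out) is the same."""
    element = etree.Element(name)
    if isinstance(value, dict):
        for key, child in value.items():
            element.append(json_to_xml(key, child))
    elif isinstance(value, list):
        for child in value:
            element.append(json_to_xml("item", child))
    else:
        element.text = str(value)
    return element


def validate_payload(raw_json, xsd_path):
    """Parse the JSON request, convert it to XML, and validate against an XSD."""
    doc = etree.ElementTree(json_to_xml("request", json.loads(raw_json)))
    schema = etree.XMLSchema(etree.parse(xsd_path))  # hypothetical schema file
    if not schema.validate(doc):
        raise ValueError(str(schema.error_log.last_error))
    return doc


# Usage (illustrative):
# validate_payload('{"account": "12345", "amount": 10}', "request.xsd")
```

Because the validated result is plain XML, the same document can also be wrapped in a SOAP envelope, which is how a REST/JSON front end can sit in front of an existing SOAP service.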

Rate Limiters/Monitors

Any enterprise exposing services to the internet must protect against the most basic of attacks – a denial-of-service (DoS) attack. DataPower has the ability to throttle requests from different clients based on configurable parameters. For example, clients can be restricted to 100 transactions per minute; if that limit is exceeded, the traffic can be throttled or an alerting subsystem can be invoked.
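DataPower exposes this as configurable message monitors and service-level policies rather than code you write. The following Python sketch only illustrates the underlying idea of a per-client fixed window of 100 requests per minute; the class and its parameters are illustrative assumptions, not a DataPower API.

```python
import time
from collections import defaultdict


class RateLimiter:
    """Fixed-window counter: allow at most `limit` requests per client per `window`
    seconds; anything over the limit can be rejected, queued, or used to trigger an alert."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(int)
        self.window_start = defaultdict(float)

    def allow(self, client_id):
        now = time.monotonic()
        # Start a fresh window for this client once the previous one has expired.
        if now - self.window_start[client_id] >= self.window:
            self.window_start[client_id] = now
            self.counts[client_id] = 0
        self.counts[client_id] += 1
        return self.counts[client_id] <= self.limit


# Usage (illustrative):
# limiter = RateLimiter(limit=100, window=60)
# if not limiter.allow(client_ip):
#     reject_request_or_raise_alert()   # hypothetical handler
```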

Securing REST services, unlike SOAP, can be a challenge due to the lack of standards. However, using a tool like DataPower, an enterprise can build flexible security gateways.

References:

Comment lines: Robert Peterson: High value features of WebSphere DataPower SOA Appliances that you’re probably not using


Big Data’s Challenges


I just returned from IBM’s Information On Demand 2012 conference in Vegas last week, where just as many new questions were created as were answered. Among the usual Vegas questions, like “What happened to my money?” and “Where am I?”, were some new ones. The most common was: What is big data? So it appears there’s still some market education needed to clarify the definition. Whenever that question arose, the conversation always seemed to come around ultimately to the challenges of managing high-volume data streams. One Perficient client I spoke to, a large Southern California utility, is dealing with a massive influx of new data streams from its Smart Meter/Grid deployment projects. As we talked I was struck by how immense the volume of information was, how much was being discarded, and how much potential there was in the data – good and bad.

Clearly the challenges of big data are real. First off, definitions are as diverse as opinions. Most organizations don’t differentiate “big data” from traditional data. In fact, in a recent study done by InformationWeek, nearly 90% of respondents surveyed use conventional databases as the primary means of handling data. With the help of the InformationWeek research, hopefully we can better understand what constitutes big data (it’s not just size) and the challenges it poses.

The InformationWeek survey revealed that the top big data sources were financial transactions, email, imaging data, Web logs, and Internet text and documents – all common data sources. It’s clear: you don’t need to be a massive utility company deploying smart grid technology to be inundated with huge volumes of data, and if it isn’t a challenge for you now, it will be very soon. Any business creating large data sets will need to embed big data management practices and the right tools and architectures, or it won’t be able to effectively use the information collected.

So what is big data? It’s more than just volume. Generally, four elements are required to qualify as big data. The first is size; 30 TB is a good starting point. Second is the type of data: big data involves several types – structured, unstructured and semi-structured. Third is latency: big data changes fast and creates new data that needs to be analyzed quickly. Fourth is complexity: characteristics of complex data include large single log files, sparse data and inconsistent data.

Now that we’re zeroing in on the definition and structure, or lack thereof, for these growing forms of data, the next question is whether you have a strategy in place to deal with it differently than you deal with more traditional forms of data. According to InformationWeek’s research of over 200 technology leaders, over half said “no,” which likely means that if you’re reading this you probably don’t either. Don’t worry though, you’re not alone: 87% of respondents are still using databases as the primary method to handle data.

Complicating the management of big data are the various approaches to handling the data, which depend on its sources and structure. The stream processing approach involves almost every aspect of computing, including processing ability, network throughput, storage and visualization. The majority of the InformationWeek survey participants expressed concerns about access to data, storage and analytics when it comes to this approach. Most were divided between those that need real-time processing of big data and those that don’t. Real-time processing can be a challenge with big data, especially in dynamic data environments. The batch processing approach to big data is designed to manage information as it grows and expands over time. Organizations that deal with this type of data are turning to the Hadoop model and software to rapidly process significant amounts of data. Hadoop is being used for some very big implementations. According to InformationWeek, Facebook had the largest Hadoop deployment in the world, with more than 20 PB of storage; by March of 2012 it had grown to 30 PB – 3,000 times the size of the Library of Congress. There are two problems in using Hadoop. First, you don’t get partial answers: you have to wait, sometimes a long time, for the entire batch to finish. Second, it can require a lot of hardware, because all data is processed at once, which means any change in the data requires the entire batch to be rerun. The only way to deal with this is to apply more hardware, which can be costly.
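To make the batch model concrete, here is a toy word-count job written in the MapReduce style as a single local Python script (not a real Hadoop job; the file and function names are illustrative). Nothing is emitted until the entire input has been mapped, shuffled/sorted and reduced, which is why partial answers are not available and why any change to the input means rerunning the whole batch.

```python
import sys
from itertools import groupby


def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word seen in the input."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1


def reducer(pairs):
    """Reduce phase: pairs arrive sorted by key; sum the counts per word.
    No result exists until the whole sorted batch has been consumed."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)


if __name__ == "__main__":
    # Local simulation of the batch: map, then shuffle/sort, then reduce.
    mapped = sorted(mapper(sys.stdin))
    for word, total in reducer(mapped):
        print(f"{word}\t{total}")
```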

Besides the various management approaches and inconsistent market definitions, there are some other hurdles that companies should be on the lookout for. According to InformationWeek’s research, almost half of the participants (44%) indicated that they lacked the knowledge needed to implement and manage big data solutions. More than half (57%) noted budget as the biggest barrier.

With traditional forms of data management rapidly approaching capacity due to the deluge of new forms of information and sources, the market is approaching a looming crossroads. More and more businesses will be faced with a lack of the knowledge resources needed to tap the vast wealth of information available to them. The tools are there and the information is clearly there for the taking. Organizations willing and able to invest in big data resources will eventually gain a greater competitive advantage over those that don’t.

With so much at stake and such a complex solution set, companies will be looking to eliminate as much risk as possible from these projects. Learning from peers, listening to analyst insights, understanding costs, and accessing the best services, mentoring and training solutions are critical prerequisites to project success and the ongoing management of big data. Doing it right the first time means tapping into providers that have experience with a wide range of technology options for big data. Perficient’s diverse range of technology partnerships and extensive training capabilities are leading many early adopters to trust us with their company’s most precious resource: information. Our industry experience and partnership awards are testaments to our delivery quality.

Several of the research points on big data made here are pulled from InformationWeek’s “Big Data Management Challenge” research report from April 2012. The report is available for free and is an interesting read for anyone looking to understand more about big data.

Download the report

Free – Hadoop for Dummies: An IBM Platform Computing Guide


Hadoop for Dummies is now available!

This free eBook is packed with everything you need to know about Hadoop analytics. The handy guide provides you with a solid understanding of critical big data concepts and trends, and suggests ways for you to revolutionize your business operations through the implementation of cost-effective, high-performance Hadoop technology.

In the age of “big data,” it’s essential for any organization to know how to analyze and manage its ever-increasing stores of information.

Access the eBook HERE

In addition to the free eBook, you can also access InformationWeek’s most recent big data research report. The report includes insights from surveys of over 200 CIOs in North America.

Access the Information Week Big Data report HERE

Perficient & IBM – Making an Impact in 2013


Perficient is Showcasing Our Award-Winning IBM SOA, BPM, and Mobile Solutions & Services at IBM’s Impact 2013 Conference

Perficient is a gold sponsor of Impact 2013, and our experts are available in Booth G9 to discuss how customers can implement impactful technology solutions including BPM, service-oriented architecture (SOA), cloud, mobile and WebSphere solutions for major industries like healthcare, financial services, automotive and retail. Visitors to the booth can learn how Perficient’s IBM solutions and its technology offerings help companies align business and IT objectives, establish optimal business processes, accelerate productivity and reduce costs.


Watch live streaming video from ibmimpact at livestream.com

Perficient is highlighting our IBM Mobility Practice at IBM’s Impact 2013 conference, showcasing how companies, including retailers, can transform their business processes to enable multichannel and mobile commerce initiatives.

Perficient and our clients Target, TBC and Monsanto are demonstrating BPM, SOA, mobile and cloud capabilities in the following sessions:

Learn more about Perficient and our IBM solution capabilities at Impact – www.perficient.com/impact

Get live updates during the event by following Perficient experts via Twitter @Perficient_IBM and following the Perficient IBM Technologies blog.

Learn How Mission Health is Using Data to Improve Quality of Care


With Perficient’s help Mission Health is leveraging IBM’s master data management (MDM) to build a scalable, accurate foundation around its most critical data to support critical business processes across the enterprise – information about patients, providers, facilities, organizations, employees and more.

Watch this webcast and learn how Mission Health is using Big Data and Analytics to improve quality of care, meet new regulatory compliance standards and manage payment reform. Learn how you can establish an accurate, trusted view of your most critical information assets.

  • Understand how MDM provides a 360-degree view of patients across the health system

  • Learn how to connect disparate registration, ambulatory and clinical systems to patient records

  • See how MDM impacts initiatives such as the patient domain, provider domain, analytics programs, Big Data and more

  • Discover the role MDM plays in addressing Meaningful Use and Accountable Care compliance objectives


Business Insight Requires Vision and Analytics



Perficient’s award-winning IBM Business Analytics practice is a Gold sponsor of Vision 2013. During the conference, our subject matter and industry experts will be on hand to discuss how Perficient helps clients leverage accurate, timely and integrated information, transforming it into actionable intelligence that provides insight, drives planning and improves performance. Our experience with IBM’s analytics solutions, combined with our industry knowledge, helps organizations improve decision making and become more agile.

DOWNLOAD OUR VISION 2013 SOLUTION OFFERING BRIEF

Perficient delivers the following benefits when deploying solutions to the enterprise:

  • Cross IBM integration with IBM Smarter Commerce, Social Business and WebSphere
  • Performance & Analytics Strategy and Roadmaps
  • Financial Statement Reporting and Consolidations
  • Management Reporting
  • Planning, Budgeting and Forecasting
  • Master Data Management
  • Data Integration
  • Software Support and Renewal Sales and Services
  • Training and Mentoring

Attending Vision 2013 in Orlando? Find out what our clients have to say about their Analytics projects

Attend Perficient’s Vision Breakout Sessions

Cooking up Savings with Cognos
Speaker: Tim Dungan, Lone Star Steakhouse | Texas Land & Cattle Steakhouse | Firefly Kitchen & Bar
Abstract: In this session you’ll learn how the finance team at Macaroni Grill incorporated operational data to meet the needs of its line-of-business leaders and bridge the divide between finance and operations.
When: Tue, 21/May, 04:20 PM – 05:20 PM
Where: JW Marriott – Segura 5

Planning for the Future at San Diego Gas and Electric-Managing Capital Projects
Speaker(s): Michael Schwing, San Diego Gas and Electric; Robert Hardin, Perficient
Abstract: Attend this session to learn how SDG&E is using IBM Cognos TM1, integrated with SAP, for actual results. In addition, you’ll hear how IBM Cognos Business Intelligence dashboards provide an executive view of projects and how Cognos Mobile dashboards allow teams to interact with data while in the field.
When: Mon, 20/May, 03:45 PM – 04:45 PM
Where: JW Marriott – Del Lago 4

In addition to Perficient’s domain expertise we have IBM Industry Authorized solutions in Healthcare and Retail:

HEALTHCARE ANALYTICS
Healthcare Analytics offers a simple and fundamentally new approach to healthcare business intelligence. With unprecedented business value and more than 600 pre-built measures and key performance indicators, Health BI offers healthcare organizations accelerated compliance with Meaningful Use and ACO quality reporting requirements by using state-of-the-art IBM BI and analytics tools to enhance clinical decision support.


RETAIL ANALYTICS
Our three-time award-winning retail solution, Retail Pathways, is a pre-packaged data mart, reporting and analytics model that helps retailers measure KPIs with a technical framework around key analytical components. Our Adaptive Analytical Framework takes this one step further by dynamically updating the ETL framework as your retail environment evolves.

Attention Shoppers, Blue Light Specials, and Big Data


Remember the days when you’d hear this over the in-store announcements? “Attention shoppers! For the next hour K-Mart is offering a Blue Light Special – half off all summer beachwear.” Those announcements were a form of on-site marketing, offers made only to the patrons shopping in the store at the time. The offers were made to everyone in the store, regardless of whether you were there to purchase beachwear or not.

Today in-store analytics, mobile technology and social media are taking the idea of the Blue Light Special to new levels, and in the process causing some privacy concerns. The proliferation of mobile, RFID and analytical tools is giving retailers the ability to identify when patrons enter, see where they go inside, and see what they are searching for or taking to the changing room, for example. Companies like Lowe’s are allowing customers to interact with their purchasing history via portals such as “MyLowe’s”; others, like Nordstrom, are using these technologies with even greater innovation, tracking where and when customers move inside their stores and sending targeted offers to the customer’s mobile device. Think of it as a Blue Light Special just for you, based on your previous purchasing history and privacy preferences. All sorts of retailers – including national chains like Family Dollar, Cabela’s and Mothercare, a British company, and specialty stores like Benetton and Warby Parker – are testing these technologies and using them to decide on matters like changing store layouts and offering customized coupons.

As competitive pressures rise and profit margins narrow in retail, we’ll see more companies looking for ways to increase consumer brand loyalty and to get the most revenue possible from every store visit a patron makes. Online retailers are already using these solutions. Now brick-and-mortar retailers are learning from the Amazons of the world and seeing just how far they can go. The following video from a recent New York Times article is a great overview of where the industry is going and the challenges it will face with consumers along the way.


Using Cognos Analytics to Improve Quality of Healthcare


Healthcare industry pressures and technology maturity are starting to converge. As they do, the organizations that are equipped to capture, integrate and analyze data from multiple systems will be able to generate greater insights that drive a shift from volume-based to value-driven healthcare, improving consumer engagement and care delivery.

IBM and Perficient have collaborated on our Healthcare Analytics QuickStart solution, which enables healthcare organizations to quickly and cost-effectively deploy an analytics solution that provides near- and long-term value by addressing Accountable Care compliance reporting and laying the foundation for future analytics and big data initiatives.

View the Healthcare Analytics QuickStart Demo Now!

Download the Perficient QuickStart solution brief.

View a recording of our recent client case study webcast featuring Catholic Health Partners and learn how analytics is being used to measure and monitor performance and provide service-line directors and financial administrators with reporting and analysis that enhances clinical care processes and business operations.



Cognos Version 8-8.4 No Longer Supported! Now What?


Effective September 30th, if your organization is currently running IBM Cognos software version 8.0–8.4, these versions will be retired and no longer maintained by IBM. This means your production issues and bug fixes will no longer be supported for the following versions:

  • Cognos Business Intelligence V8.4.1
  • Cognos Data Manager V8.4.1
  • Cognos Business Intelligence Analysis V8.4.1
  • Cognos Business Intelligence Reporting V8.4.1
  • Cognos Mobile V8.4.1
  • Cognos Analysis for Microsoft Excel V8.4.1
  • Cognos Metrics Manager V8.4.1
  • Cognos Planning V8.4.1
  • Cognos Business Intelligence PowerPlay V8.4.1

The replacements for these versions are IBM Cognos V10 and IBM Cognos Planning V10. For organizations looking to fully benefit from the most current architecture and functionality of IBM Cognos V10, the process requires a thorough understanding of the differences between the two platforms and consideration of the strategy and plans for your upgrade.

Typical questions that arise regarding a Cognos 8 to V10 migration are:

  • Based on your current version what are your license costs and discounts?
  • Which applications should you upgrade first?
  • Is your upgrade plan documented and aligned with business priorities?
  • Are you taking advantage of lessons learned from other V10 upgrade projects?
  • Do you have an issue resolution and go-live support plan?

Don’t put your Cognos investment at risk! Learn how Perficient can help you successfully migrate to Cognos V10 and benefit from the most current architecture and functionality. Perficient has developed our Cognos 10 Migration QuickStart, a services and software bundle designed to get you up and running on your most essential applications and then build a plan to migrate the rest of your Cognos 8 applications. Find out if our QuickStart meets your needs by scheduling a Cognos 10 migration assessment. The assessment provides preliminary insight into your upgrade readiness and essential input for planning your upgrade. The assessment delivers:

  • Insights into upgrade complexity and readiness.
  • Upgrade options and alternatives.
  • A determination of your business priorities and the most appropriate upgrade path for your organization.
  • A high-level estimate of the investments in time and resources needed to support your upgrade.

Request our Cognos10 Migration QuickStart Solution Brief

Mobile-Big Data-Predictive Analytics-Social Media & the US Open


So what do all of these technology solutions have to do with the US Open? Behind the scenes, IBM has helped run the show since 1990. A recent online article shows how IBM is using these tools to flow information far beyond match scores. Handheld devices used courtside feed multiple data points, such as ball speeds, from each match into the system, where they hit a database that’s accessible to announcers broadcasting the US Open and to reporters. The system also stores historical data, allowing fans and media to compare players based on previous performance, and combines head-to-head player matchups and historical video with social media for a richer experience for tennis fans. The platform also pulls in social data, gauging the volume of posts about certain players and matches, then uses predictive analytics to estimate the potential interest in programming around them.

Read - Inside the IBM-powered Command Center at the Annual Tennis Mecca

How to install XI52 Virtual Appliance


Among the interesting IBM offerings in its suite of middleware products are the DataPower XI52 and XC10 virtual appliances. Combine these technologies with a few open-source products and you have the foundation for a nice small-scale enterprise environment in which you can experiment and test a variety of configuration solutions.

With this idea in mind, I recently set out to build a small laboratory environment using a virtualized XI52 as an ESB, a virtualized XC10 as a caching solution, WebSphere MQ, FileZilla FTP Server and a virtualized Oracle database, along with a few other technologies that we often encounter in the wild. In this first article I’ve shared a how-to guide that can be used to set up the ESB and caching environments on a laptop, as foundational middleware components for the infrastructure.

This is the first step in building a lab that can certainly be extended and used to emulate several enterprise use-case scenarios.

As an outcome of this exercise, you’ll have everything you need to set up and tear down several XI52 and XC10 instances. And if you’re interested in seeing how this environment can be used, then follow along as we add various components to our mini enterprise lab.

For step-by-step instructions, download the document here.

Simplify Development = Embrace Patterns


I’m a BIG believer in two things: 1) work smarter, not harder; 2) keep things simple – avoid complexity. Complex leads to complicated, complicated leads to misunderstanding, misunderstanding leads to chaos. My guidelines seem easy enough, right? Well, good news then: they are simple and easy to apply to just about anything we face day to day – except in golf. Ah yes, the wonderful world of golf [which currently holds me hostage]. The place where complex instructions, ideas and/or suggestions have license to run rampant – like a wild herd of buffalo feasting on the open range. If anyone tells you differently about golf, run. I started hacking away on fairways and driving ranges across this great country 28 years ago. It’s only recently that I finally feel like a hero when I play. But that feeling came at a high price. Golf IS a complex sport to learn, practice and/or play. It requires the highest levels of humility and stick-to-itiveness. Love that word, stick-to-itiveness [defined as: the quality that allows someone to continue trying to do something even though it is difficult or unpleasant]. But I digress.


Let’s get back to keeping things simple in this blog. What is a pattern, as it applies to WebSphere Message Broker [WMB] and/or IIB v9? In short, a pattern is a reusable solution that encapsulates a tested approach to solving a common architecture, design or deployment task in a particular context. Ever heard of the television show Cupcake Wars on Food Network? Each show consists of cupcake bakers battling to see who will reign supreme as cupcake baker of the week. In the final bake-off, bakers are required to bake 1,000 cupcakes each. To accomplish this, bakers use commercial cupcake pans, like the one shown to the right. So, how does a cupcake pan apply to patterns in WMB or IIB v9? Easy: the pan is my pattern. The pattern ensures that each artifact [message flows in WMB or IIB v9] is created the same way, like a cupcake. Are you now thinking “…what about customization, Jason?” Easy: what kind of cupcake does your client like? Vanilla, chocolate or strawberry? Does it need sprinkles and frosting too? In case you’re curious, my fav cupcake is a vanilla, maple syrup and bacon concoction.

The IBM Information Center entry on patterns states:

A pattern captures a tested solution to a commonly recurring problem, addressing the objectives that you want to achieve. The specification of a pattern describes the problem that is being addressed, why the problem is important (the value statement), and any constraints for the solution. Patterns typically emerge from common usage and the application of a particular product or technology.

A WebSphere® Message Broker pattern can be used to generate customized solutions to a recurring problem in an efficient way. WebSphere Message Broker patterns are provided to encourage the adoption of preferred techniques in message flow design, to produce efficient and reliable flows. Patterns provide the following benefits:
  • Gives you guidance for the implementation of solutions
  • Increase development efficiency, because resources are generated from a set of predefined templates
  • Result in higher quality solutions, through reuse of assets and common implementation of programming approaches, such as error handling and logging

My last engagement was in healthcare, an HL7 implementation and conversion. Early on in the project, the team decided to leverage the power of pattern development within WMB. Why did we do this? The answer: there were 300+ interfaces required. Remember that commercial cupcake pan – does it make more sense now? During the initial phase of the project, the majority of our time was spent developing our pattern [the cupcake pan]. Once it was completed, developers could work independently and simultaneously. Since we created the pattern, the development process was simplified. Step 1: assign an interface to a developer. Step 2: instantiate the pattern into his or her workspace. Step 3: apply the customization [flavor, frosting and sprinkles] and voilà! In short, the next time you’re faced with developing multiple interfaces for a client, consider employing patterns to ease the process – a rough sketch of the idea follows below. Now, if I could only come up with a reliable pattern for my driver tee shots. They’re either a hero bomb 290+ down the pipe, or wayward left. Looks like I need to take my own advice and simplify things…
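For readers who have never worked with pattern instantiation, the Python sketch below captures the “template + parameters = consistent artifact” idea in miniature. It is only an analogy: real WMB/IIB patterns are authored and instantiated in the toolkit, not generated from text templates, and every name, queue and interface below is a hypothetical example.

```python
from string import Template

# A toy "cupcake pan": one parameterized flow skeleton every interface is generated from.
FLOW_TEMPLATE = Template(
    "Flow: ${interface_name}\n"
    "  Input  : MQ queue ${input_queue}\n"
    "  Parse  : HL7 ${message_type}\n"
    "  Map    : ${mapping_module}\n"
    "  Output : MQ queue ${output_queue}\n"
    "  Errors : common error-handling and logging subflow\n"
)


def instantiate_pattern(**params):
    """Step 2 of the workflow above: generate a developer's starting artifact
    from the shared pattern."""
    return FLOW_TEMPLATE.substitute(params)


if __name__ == "__main__":
    # Step 3: the developer supplies the "flavor, frosting and sprinkles".
    print(instantiate_pattern(
        interface_name="ADT_A01_to_EMR",   # hypothetical interface
        input_queue="HL7.ADT.IN",
        message_type="ADT^A01",
        mapping_module="map_adt_to_emr",
        output_queue="EMR.ADT.OUT",
    ))
```

The payoff is the same as with the cupcake pan: the common structure (error handling, logging, flow shape) is baked in once, and each developer only supplies the interface-specific parameters.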

Here is a great video that takes you step by step through using patterns in IIB v9 – 10+ minutes:

http://www.youtube.com/watch?v=C-m2nF4Fk8E

View IBM’s Redbook Patterns: SOA Design Using WebSphere Message Broker and WebSphere ESB

http://www.redbooks.ibm.com/redbooks/pdfs/sg247369.pdf

A SOA Journey Using the TOGAF ADM


In a recent blog posting, we provided a guide for standing up an XI52 as an integral component of an enterprise-like laboratory environment. This was the beginning of several activities, which spawned the idea for a continuing series of articles around IBM SOA appliances and the use of The Open Group Architecture Framework (TOGAF) and its supporting Architecture Development Method (ADM), which together are used to evolve an existing enterprise architecture. In this sense, by building the XI52 as an ESB, we have selected a key component of an infrastructure technology that will be used in an SOA-based integration project. The idea now is to use TOGAF and the ADM to build out an EA that will inform and guide the integration project’s solution architecture using DataPower and other WebSphere technologies.

So, using the XI52 as an ESB became the genesis of a “target first” architectural context, which is now the starting point for an EA Preliminary phase. We will follow this with a Visioning phase, and then proceed with the execution of an EA life cycle for this integration project.

As a short introduction, I’d like to state up front that enterprise architecture is not done in a vacuum; rather, it starts with an existing landscape of enterprise capabilities and assets. In this project, as in typical projects, we will be drawing from building blocks that were created in previous EA efforts. These are the assets that make up the company’s business, data, application and technical architectures respectively. So our next steps will be to flesh out additional EA elements for the project’s problem domain as we work through the ADM process, and ultimately to deliver various EA artifacts that will be utilized in architecture governance and the solution architecture.

Before we begin, let’s review some contextual information pertaining to TOGAF and the use of this framework in this effort. In the discipline, or practice, of enterprise architecture, the Open Group has developed and matured its own architecture framework known as TOGAF. Historically, the framework is based largely on IT architecture models developed for the Department of Defense. TOGAF has continued to evolve, the current version being TOGAF 9.1. In brief, TOGAF is a foundational framework of generic services and functions from which a set of specific architectural building blocks are created and reused.

Additionally, the framework offers a comprehensive guide and process that provides a development life-cycle approach known as the Architecture Development Method (ADM). The ADM is an iterative process with guidelines for building enterprise architectures (EA) for organizations. TOGAF also provides a number of IT-level meta-models and reference models as tools that are used in the process. Each iteration covers all phases of the ADM process, and iterations also occur within a phase. A fundamental concept of the ADM process is that for each iteration a new set of decisions must be made as to:

  • The breadth of coverage for the defined enterprise.

  • The level of detail to be expressed in terms of artifacts and models.

  • The schedule, in terms of iterating over the whole process as well as the intermediate iterations.

  • The assets to be leveraged from the organization’s existing architectural building blocks.

  • Architectural assets that may be external to the current organization, which may include other frameworks, systems, or assets from vertical industry models.

In our future series of articles we intend to flesh out varying levels of detail of the framework, as well as the artifacts created in the ADM process. But in this first article, we’ll take the proverbial 20,000-foot view. TOGAF comprises six sections. Along with the ADM, there is the Enterprise Continuum, a repository of architecture information and building blocks, which includes what is called the Architecture Continuum – a related set of solutions ranging from industry-common solutions to more organizationally specific solutions. Then there is the Architecture Content Metamodel, which describes TOGAF viewpoints and their associated artifacts. TOGAF also provides two reference models, the Technical Reference Model and the Integrated Information Infrastructure Reference Model (III-RM). These models make up what is called the Foundation Architecture of the Architecture Continuum.

Next, the reference models and the ADM guidelines, tools and techniques comprise a set of best practices for an Architecture Capability Framework, which is used as a method for assessing the readiness of an organization’s EA practices and related governance models. Now, in our particular area of focus, we will be executing the ADM life cycle for an integration project, which will be designed around an SOA architectural style.

Over the course of these articles, we will build out a reference implementation of our architecture using various aspects of the framework and process. Along the way we will also reference the Banking Industry Architecture Network (BIAN) framework, and we will tie in the IBM WebSphere reference model, tailoring the deliverables with a particular emphasis on IBM’s DataPower SOA appliances.

So, in using the ADM as a guideline, we have a comprehensive approach for planning, building, governing and maintaining an architecture, as well as a means for maturing the organization’s services development practice and systems integration capabilities. This is just one noteworthy aspect of the ADM: it provides us with a continuous improvement approach by incorporating change management, along with the various techniques, tools and procedures that are also enablers in determining when to modify or rebuild an EA in response to changing business and technological requirements.

Figure 1. TOGAF ADM

The project is a portal integration for FYI Bank. We’ll get into the project details in the next article, where we will provide some additional context, such as drivers, goals, objectives and requirements. In each article we will build and add to various artifacts.

So, let’s get started with a project package structure that is aligned to each phase of the ADM. In subsequent blog articles we will flesh out the details for each work package. The goal: by the time we reach Phase E, Opportunities and Solutions, we will have a comprehensive set of artifacts to transition into the solution architecture, which means that within each work package we will include various architectural deliverables.

Figure 2. Portal Integration Project – ADM Process and Work Packages

Along the way we will also provide concrete examples of how one might include multiple projects within the structure of an ADM process model, using some third-party EA modeling tools. So, welcome to the journey – I hope you will find it both informative and interesting! I’m also looking forward to your comments and questions on this topic, as well as any suggestions on what you would like to see.

A SOA Journey Using the TOGAF ADM – Part 2


In part 1 of this series I provided a ten-thousand-foot view of the project relative to the TOGAF ADM. In this post I’ll start with the proverbial peeling back of the onion. Through an integration project, FYI Bank has made a strategic decision to address the challenge of its silos of IT systems by separating out its processes into business functions. One early business driver is the need for FYI Bank to improve business agility to respond to regulatory and market changes. The goal is to move the enterprise towards an infrastructure of building blocks of predefined services. To achieve this, the business executives of FYI Bank have also decided that an SOA strategy will best help meet this goal. The Bank’s executive steering committee has determined that an attempt to implement a common enterprise architecture approach across the entire financial system would take a significant amount of time and effort, but they also believe that it is an achievable goal, especially if the business organizations adopt a shared set of business and IT standards to ensure that a flexible SOA is possible. The steering committee has also approved the Lending Services organization’s proposal for a Loan Application Portal Integration project, which is the pilot project for the SOA initiative. Additionally, FYI Bank’s Infrastructure Design Authority has selected IBM’s WebSphere DataPower SOA appliances as a new enterprise platform services capability. The enterprise architecture team has been engaged to support this organizational change to SOA and the introduction of this new technology component.

As the assigned architect responding to the business request for EA, I’ve identified some preliminary EA objectives around SOA governance. I intend to provide several model representations that will both inform and reflect architectural decisions going forward, as well as provide a new SOA governance capability around the architectural decision processes. One set of objectives is that the governance model will be developed incrementally and will provide a reasonable set of constraints and compliance checks in the development of the business capability and solution architecture. The measures for these objectives have not been fully identified at this time. The model representations that follow in this article are parts of what will be a comprehensive look across the architectural continuum. I’d like to encourage the reader to take some time to review the various models. Their purpose is to communicate architectural decisions by presenting various model views and perspectives in TOGAF and UML notations while observing the syntax of those notations, which is an important part of doing architecture development.

With that statement in mind, I’d like to bring up the subject of architectural modeling and tools. While it may feel a bit disjointed from the flow, I believe it is an important aspect of the article, since I’ve already mentioned that you will be seeing several modeling representations, and, more importantly, these representations are guides for doing architecture.

Over the years of reading enterprise architecture blogs and participating in various EA efforts, the topic of modeling tools has often been discussed. Sometimes it seems more like a religious debate, and I’ve read statements that a tool is somewhat superfluous to the EA tasks at hand. From my perspective, I believe that a good tool is important to any craftsman – the mechanic, the carpenter, and yes, the enterprise architect. In order not to come across as dogmatic or biased about any particular EA tool, I feel that it’s more important to have a reasonable and specific list of requirements. I’ve taken this short list from RFPs on previous EA projects.

  • The tool shall provide comprehensive coverage of several modeling notations (UML 2.0, ArchiMate, BPMN 2.0, and TOGAF elements and stereotypes).
  • The tool shall provide comprehensive coverage of modeling diagrams (TOGAF, UML, free-form).
  • The tool shall provide support for the TOGAF ADM.
  • The tool shall provide relational management of modeling elements (supporting model element integrity and object reuse).
  • The tool must provide a content repository, preferably a relational database.
  • The tool shall provide a means of publishing the model as Web content (not necessarily providing a proprietary Web application client).
  • The tool should provide constraints and guidance on modeling syntax.
  • The tool should provide a flexible means of structuring, packaging and ordering model elements.

The importance of this short list of requirements will become even more apparent as we transition through the ADM phases, but at this point I have another objective, which is to build a repository of reusable and relatable artifacts that I will be able to use across the ADM iterations and the development life cycle of the project. Another objective is to elevate our modeling representations above an academic, “marketecture” presentation approach and use a rich set of notations that go beyond notional box-and-line diagrams – all of which tend to be pasted into Word documents and templates that become stale soon after their release. In my thinking, EA model representations should be iterative, dynamic, near real-time, and referenceable and reviewable by a larger audience at any time, preferably via a Web-based reporting capability. Ultimately the goal of the EA is to provide more than just architectural representations. The EA should also use the TOGAF ADM as a tool to guide the development of a set of prioritized and aligned objectives, and to provide the means for continually evaluating and understanding the organization and its architectures, as well as to communicate this understanding to stakeholders, while moving the organization forward to its desired state.

So, as a way of communicating the FYI Bank enterprise model, I have structured a repository as work packages that follow the TOGAF framework. In this regard a work package is a container that holds model representations as artifacts used in the implementation of the SOA governance capability. As the naming convention I will use the TOGAF phases as the parent work packages. These will contain viewpoint packages, which are containers for model views. Model elements will have metadata attributes, which will be visible at some level in the fleshed-out designs. I will also provide a comprehensive view of the models published as Web documents. To give you a little more context I’ve included the following textual representation. The section labeled Model Diagrams represents a combination of TOGAF and UML model diagrams that are both referenceable and reusable as project artifacts. These are generally positioned at the package root level as they often provide different perspectives of the work package content. Again, I will post the evolving model as Web documents in upcoming blogs.


FYI Bank Enterprise
  EA Context Model View

Requirements Management
  Enterprise Architecture Requirements Viewpoints
  Project Level EA Requirement Viewpoints
    Portal Integration EA Viewpoints
Preliminary
  Content Meta-Model Viewpoints
  Logical Layer Viewpoints
  Principles Viewpoints
  SOA RA Meta-Model Viewpoints

Note: Model Diagrams

Repository Content Model View – Loan Application Portal Integration Project
Project Structure View
Content Meta-Model View
Principles Model Domain View
EA Principles Model View

A. Architecture Vision
B. Business Architecture
C. Information Systems Architecture
D. Technology Architecture
E. Opportunities and Solutions
F. Migration Planning
G. Implementation Governance
H. Architecture Change Management

Now let’s return to the project at hand. At present, FYI Bank has an enterprise architecture practice, and several components of the TOGAF framework and ADM are already in place. In previous ADM iterations the EA team has established several core principles across multiple domains. The current enterprise organizational context is fairly well understood. Business frameworks such as portfolio and operational management are also in place. Now the task ahead of the team is to introduce new capabilities for an SOA-style architecture, as well as building the business solution for the Loan Application Portal Integration project. To guide the creation of FYI Bank’s SOA governance model I’ll turn to the Open Group’s SOA Source Book. The Source Book is the Open Group’s collection of source materials for use by enterprise architects working with service-oriented architecture. The documentation of the work being performed in this area of TOGAF can be found at http://www.opengroup.org/soa/source-book/intro/index.htm. If you have not already read the Source Book, you may want to take a moment to read through the introduction and become familiar with the framework of the documentation. I’d also like to note, as a reminder, that I will be using both the Source Book and the TOGAF 9 framework throughout the project. The obvious implication of introducing the SOA Source Book is that the Loan Application Portal Integration project now has two areas of architectural concern: the delivery of an effective business solution, and the SOA governance standards, provided by the new EA capability, on which that solution will be based.

Recalling the TOGAF ADM “crop circle” from the first blog, we were looking forward to the Preliminary phase, where I would begin establishing the “where, what, why, who, and how we do architecture” in the enterprise. Since we are dealing with a change to our architecture practice, we’ll start by turning to the TOGAF 9 Architecture Capability Framework for guidance and inputs. In reviewing the categories of the capability framework, we see that this iteration of the ADM will impact the current design of all four domains of the architecture: business, data, application, and technology. I followed this activity with a review of the UML package model diagram, which outlined the Loan Application Portal Integration project structure, so I am now informed about which work packages will also be impacted. A detailed explanation of TOGAF’s Architecture Capability Framework (Part VII) can be found at: http://pubs.opengroup.org/architecture/togaf9-doc/arch/index.html

While examining the scope of change, I’ve created a new Repository Content Model View, which is an abstraction that evolves the initial context for the next set of activities. Within the Preliminary phase work package I’ve now added a work package for our new EA SOA principles, an EA requirements work package, and a content metamodel view, alongside the Logical Layer Viewpoint, which has been reused from our previous EA work. I’ve also identified the new EA capability as TOGAF SOA Governance.

In developing this model artifact the scope of the architectural change becomes more evident. In particular, the change will impact the roles and responsibilities of several actors – business SMEs, business architects/analysts, and the architectural concerns covered by solution, data, security and technical architects, as well as system and software designers. The decision as to which roles and responsibilities will be included in this iteration is still unknown and under consideration at this time. What is known is that there are skills involved that are not outlined in the EA skills matrix.

Repository Content Model View: Loan Application Portal Project – Project Structure View

The next category that I will cover in the capability framework is Architecture Compliance. Of the two subsections of compliance, I start with a focus on the function of architecture, since this area addresses project-specific views of the enterprise architecture as used throughout all phases of the ADM. Here I will introduce what I consider a first-class contextual model for the EA effort: the content metamodel. While the TOGAF framework provides several excellent representations of this model, in my humble opinion its significance cannot be overstated. So I’ve included a content metamodel representation as part of our repository, as an entity relationship diagram. I will use this model as a sort of planning tool as I consider which objects will be included in the iteration and the tasks that will be needed to develop those objects. If I’m required at some point in the project to provide an .mpp-like project plan, I will use the content metamodel to flesh out the tasks for a work breakdown structure.

So throughout the execution of the ADM this artifact will play an important role in terms of planning EA activities, as well as maturing the EA content repository as we implement new business and architecture capabilities. During a previous EA project, a business stakeholder commented that at first glance the content metamodel seems overwhelming, potentially producing a sense of “sticker shock,” or just too much information. My explanation was that the model may be viewed as an object-oriented version of a PM work breakdown structure, and that it can also inform other agile PM methodologies by providing stories and tasks. For example, taken as a whole, the content metamodel is the basis for an architectural epic. In a future blog I will suggest how EA efforts can be integrated into a larger-scale agile approach as an architectural epic.

Content Meta-Model View

In the next model I will lay out a Domain Principles Model Viewpoint, which provides us with another contextual view. I’ll use this to contain, communicate and manage the newly added SOA principles. Again, we already have reusable work packages from previous iterations of the ADM. In this Preliminary phase activity I’ve added a Principles of Service Design work package, which we saw added to the Repository Content Model View. This also gives me traceability and scope by accounting for the EA activities that will be used to flesh out the details of this work package. The principles that are added to this work package become a vital part of SOA governance.

Domain Principles Model Viewpoint

Next, I’ve added several new model objects that are stereotyped <<principle>>. These elements are now part of the EA principles catalog as principle statements, along with their rationale. These principles will be referenced and applied in both the next phase and future phases of the ADM, with the expectation that I will be adding more principles to this work package.

Service Design Principles View

Given the significance of the content metamodel and the various newly added model artifacts, I’d like to suggest that we take a pause and review Part IV of the TOGAF framework. In Part 3 of the series I will build out more of the Preliminary phase by delivering the next version of the repository; I’ll link in business principles, business goals, and business drivers. We’ll add more detail around SOA governance, and I’ll then flesh out the details of the Architecture Definition artifact. This will lead to a transition into the Architecture Vision phase, where we will take up the decision to introduce the IBM® WebSphere® DataPower® Integration Appliance as a strategic technology enabler for SOA.

A SOA Journey Using the TOGAF ADM – Part 3


In this article I address some of the final elements and models in this iteration of the Preliminary phase. At this point I would like to comment on the ADM as an iterative process, in regard to Architecture Definition, which includes an ideation/inception iteration. In the role of business architect, I have taken some time to consider each phase of the ADM in preparation for the next iteration. For the most part, I have started to consider the possible artifacts that will be developed in the subsequent iterations. For this I generally use a heat-map-styled document for planning and charting progress. This artifact provides a planning snapshot for upcoming gate reviews as the transition is made between iterations. Up to this point I’ve only lightly touched on some of the essential aspects of the architecture effort, with a focus on governance. It should be noted that the degree of “doneness” for any particular artifact produced up to this point will vary. And because I am publishing the repository content as Web documents from the modeling tool, stakeholders are not constrained to a particular deadline for reviewing the content of the EA repository.

This is a type of model-driven approach where the repository is used for dynamically doing architecture development. This is a concept that will require time for organizational socialization, and the shift in organizational behavior may be slow. This is an aspect of TOGAF that I opted not to cover in this series, but organizational change should not be underestimated. However, the benefits of this approach are significant in that it promotes agility as a capability as well as an organizational behavior.

Management Heat Map

One key value-add of the artifact development heat map is that it can easily be used in conjunction with agile project management methods. For example, aligning the heat map with PM methodologies provides a means for enabling the use of the architecture artifacts and governance checkpoints, and it can serve as the basis for creating architectural epics. This is also my way of creating a plan for the integration points for introducing new SOA capabilities into the already established business planning, portfolio, and operations management frameworks. For example, by the end of this article I will have developed an architectural definition that addresses the integration points of Architectural Direction, Structured Direction, and Architectural Governance. The heat map will provide me with a measure of the degree of completeness that is needed to satisfy these framework interfaces.

Management Frameworks

At this point I’m wondering if you have asked why the heat map does not specify a list of artifacts. The reason is the use of a modeling tool to improve development velocity. All artifacts are produced as models. While there may come a point where a document is called for, I’ll simply use the tool to generate a model report (see the link to the content at the end of the article). Another situation is where the tool may not provide coverage for an architectural discipline that is not expressed in TOGAF, UML, or BPMN notations. This situation will come up a little later in the article.

It’s not uncommon that business drivers, and their related goals and objectives, are defined outside the formality of the ADM. This is the case with FYI Bank. And since context is always an important dimension of any project, I will build out a set of work packages with various goal views. In this first model view I’ve placed the Enterprise Architecture work package at the center.

Goals Viewpoints

The reason for placing the Enterprise Architecture work package at the center of this view is based on the TOGAF modeling syntax, in which drivers create goals, and objectives are then used to realize the stated goals. From this construct we derive capabilities. In this view I am emphasizing that the TOGAF enterprise governance capability is a contributor to the delivery of enterprise business capabilities from the Corporate, Project and Infrastructure business domains. This is a contextual representation of how EA governance is tied to a heterogeneous part of the enterprise. It provides a model view that allows us to reason and make decisions about the governance capabilities and their impact on the business segments that will be engaged in delivering the business solution. Placing the Enterprise Architecture work package at the center of this model shows that, by using the TOGAF ADM, we have a comprehensive means for considering where the governance model will be implemented and used across the enterprise.

It’s from this contextual view that I take my starting point for applying various levels of abstraction in considering the drivers, goals and objectives for each of the identified business domains, starting with Executive Management. The Executive Management work package contains a goals tree view that has captured the primary drivers for the SOA initiative. This view is an artifact that will be gate-reviewed at the end of the Preliminary phase and formally accepted as a baseline at the end of the Vision iteration. However, as stated earlier in this article, these models, along with several subsequent models, are now available for informal stakeholder review and comment.

Executive Initiatives.jpg

For now I will use this model to communicate with executive leadership and members of the executive steering committee to flesh out any uncertainties, ambiguities, or missing or unexpressed drivers or goals. I will also work with executive leadership to ensure that the stated objectives are in line with management expectations. For example, this model can now be used to point out that KPIs for the stated objectives have not yet been identified. Therefore we have some additional work to capture decisions about the acceptance criteria and commitments before the view is ready to baseline and transition to the state of Formally Accepted on the heat map.

My next step is to create a series of Goal tree views for each of the business domains. For example, I will also work with the other business segments and leaders, such as Marketing, the Lending Services program sponsor, and the Technology Operations Group (TOG), to ensure that the stated objectives and KPIs against Goals are captured and reviewed. I should note that these are somewhat informal review sessions where not only can we consider gaps in understanding, but we are also able to set the stage for decisions about the scope and prioritization of objectives for the outcome of a completed ADM. These decisions are then codified in the Architecture Vision. The next step is to produce a couple of Goal tree views in the Enterprise Architecture work package.

Up to this point I have placed a great deal of my focus on Governance. While this capability is perhaps among the most vital, there are several other SOA capabilities that should be considered. I stated in the second article that the executive steering committee has determined that a SOA strategy is a key goal for successfully realizing an agile business. In my humble opinion this certainly qualifies as an open-ended statement. To narrow the focus of the EA effort I will use the same approach as with capturing and vetting business drivers and goals. Referencing the SOA capabilities found in the TOGAF SOA Source Book, I will create an EA Goal tree view with several EA objectives that will be used to deliver several new SOA capabilities.

The TOGAF Source Book states that “from a TOGAF context: capabilities are typically expressed in general and high-level terms and typically require a combination of organization, people, processes, and technology to achieve.” Using this definition I will add two new models along with a new SOA Governance Process view. First I'll add a Goals tree view with several new enterprise architecture objectives. I'll then use this view of the SOA Development Goals as part of a larger EA planning exercise, in which I will review the new goals and objectives along with their related capabilities with the executive steering committee. Like any other effort to build new capabilities, I will use this view to prioritize, scope and set expectations in terms of what capabilities will be delivered in this cycle of the ADM.

SOA Development Goals View.jpg

Earlier in the article I explained why the heat map does not provide a list of artifacts. Here is a concrete example. In this tree view I have called out a new capability to use Six Sigma as a discipline within the BPM practice. In the next iteration I will begin work on the development of the business process views. During that activity I will then introduce the use of Six Sigma and the DMAIC methodology for business process analysis. Actually, this decision was made as part of some of the “light” work done in the ideation/inception iteration, and based on a stakeholder's inquiry about development of a BPM practice. As a result this capability has been identified in the Goal tree view. The new artifact simply appears and can be reviewed or further developed.

This is also a good example of how a capability can be introduced as a new discipline along with its associated methodology. The takeaway here is that even a new discipline such as Six Sigma is still developed within the ADM process. In the next iteration I will introduce value stream analysis as part of this new capability in supporting Portfolio Management. According to TOGAF this would trigger a new iteration of the ADM. While the development of a BPM practice is outside the scope of this cycle of the ADM, I will touch on the subject in the next article.

For now I'll return my focus to capturing a few more elements of the preliminary phase. It's not unusual that business drivers, and their related goals and objectives, are defined outside the formality of the ADM. Such is the case with FYI Bank. Since context is always an important dimension of any project, I will build out a work package view containing several Goals viewpoints.

Goals Viewpoints.jpg

Perhaps you're wondering (or maybe not!) why I've placed the Enterprise Architecture Goals viewpoint at the center of the model. The first reason is the modeling syntax, in which Drivers create Goals, Objectives realize the Goals, and the deliverable is a demonstrable capability. I think there is a nuance worth noting at this point. The models being produced as part of the EA effort are not the architecture. They are simply a means for reasoning and making decisions. Models in this sense are not the deliverables. The deliverable is the capability, and that is the architecture.

The second reason, which I feel is an important perspective, is how the SOA Governance capability is tied to the other enterprise business capabilities. This context view provides not only a representation of the heterogeneous nature of the enterprise; it also informs me about the business segments that will be involved in the delivery of the new business capability. Therefore, the Enterprise Architecture work package is at the center because it represents the impact of EA capabilities across the identified business domains in the enterprise.

Starting with this view I'll then apply various levels of abstraction, beginning with the Executive Management Goals. The Executive Initiatives view is an artifact that will eventually be part of a gate review at the end of the preliminary phase. However, as stated earlier in this article, these two models along with several subsequent models are now available for informal stakeholder review. And I will use these models to work with members of executive leadership and the executive steering committee to flesh out any uncertainties, ambiguities, or missing or unexpressed drivers or goals. I will also work with leadership to ensure that the stated objectives are in line with expectations. These particular models will also be used to point out that the KPIs for the objectives have not yet been identified. Therefore, there are some decisions and commitments that must be addressed before these views are ready to baseline to the state of Formally Accepted on the heat map.

Executive Initiatives.jpg

Next I will work with the other business segments, such as Marketing, the Lending Services program sponsor, and the Technology Operations Group. These are somewhat informal review sessions where not only can we consider gaps in understanding, but we also set the stage for prioritization and formation of the Architecture Vision. In addition to the business Goals I will also provide a couple of views from the Enterprise Architecture work package.

Up to this point I have primarily focused on Governance. While this capability is perhaps the most vital, there are several other SOA capabilities to consider. As stated in the second article, the executive steering committee has determined that business agility can be achieved by building a SOA strategy. In my humble opinion this certainly qualifies as an open-ended statement. So to narrow the focus I will take the same approach as used when fleshing out the business goals. In the Enterprise Architecture work package I will create an EA Goal tree with several objectives. I'll then abstract the Goals into a viewpoint and tie in several SOA capabilities that have been taken from the TOGAF SOA Source Book.

The TOGAF Source Book states that “from a TOGAF context: capabilities are typically expressed in general and high-level terms and typically require a combination of organization, people, processes, and technology to achieve.” Given this definition I will add two new models along with a new SOA Governance Process viewpoint. But first I'll add the new Goals tree view. I'll then use this SOA Development Goals view as an input to an EA planning exercise. Part of the planning will be to review the new goals and their related capabilities with the executive steering committee. This will also provide an opportunity to prioritize, scope and set expectations in terms of what EA capabilities will be delivered in this iteration of the ADM.

SOA Development Goals View.jpg

The next artifact is the result of an activity to build out a SOA Governance Process. The goal here is to integrate and/or align the SOA Governance process, in terms of standards, tools, and governance checkpoints, with FYI Bank's Program Portfolio Management Office, as well as with other management frameworks used in the Operations and Solution Delivery business domains. I'll use a level 1 BPMN process diagram, which allows me to include organizations and the required roles. I'll also use the process model to introduce artifacts such as governance checkpoint documents, as well as to identify potential integration points with current or future technologies such as Lifecycle and Policy Manager tools. Another advantage of using a BPMN process model is that I can also reason about data processing at the integration points, which may also provide opportunities for automation.

Some examples: a Service Definition as an input to Service Portfolio Management, which I expect will result in an input to the Portfolio Backlog used by the Solution Portfolio Management sub-process. A Service metadata message may be an automated input to a Lifecycle Manager system that is supporting the Service Lifecycle Management sub-process. Another example is that I will also be able to reason about interactions between other sub-processes that may require further investigation in terms of what specific information is needed, such as between Solution Portfolio Management and Solution Lifecycle Management, where an Architecture Definition (AD) is realized for non-functional requirements in terms of architectural principles, constraints, security, monitoring, and Key Performance Indicators. There are also other aspects of the AD that I will cover a little later in the article. In these examples I'm starting at a notional level to consider the processes around the governance model and how it will be used between organizations and people, as well as any technologies that will enable the process.

SOA Governance Processes.jpg

Now I come to the final activities in this article. These involve some informal work around Opportunities and Solutions, as well as defining the elements of the Architecture Definition. Before I address the Architecture Definition (AD) artifact, I'd like to look back to the conclusion of the second blog article, where there were still a couple of things left to cover in this article. First, I have not yet tied principles into the continuum; second, there was the introduction of a key business decision to use IBM DataPower Integration Appliances as a strategic technology, which will be added to the Platform Service capability.

To address these concerns I'll turn to the Architecture Definition. This artifact encompasses the rather complex meta-model that is defined as the SOA Reference Architecture Technical Standard in the TOGAF SOA Source Book. At the heart of this sophisticated object metamodel is the Architectural Building Block (ABB). The Architecture Definition is intended to serve as the integral component representing the ABB. Perhaps the best way to describe this artifact is that it acts as a container in which we instantiate the SOA Reference Architecture. What's most unique about this artifact is that it is primarily realized as model elements in the EA content repository, which I will publish as supporting Web documents. This object diagram represents the SOA Reference Architecture Technical Standard.

ADL-Meta Model.jpg

Next I have the AD view. This is an output of the Solution Portfolio Management sub-process. Notice that the AD model has references to the Principles viewpoint work package, which is the catalog of architectural principles; so Principles will now be tied to the Solution Lifecycle Management sub-process. The next reference is to the Technical Reference Model, which is an artifact from the Technology phase of the ADM and is a representation of Enabling Technologies. There is also a reference to the EA and Project EA Requirements work packages; these contain NFRs, constraints and assumptions. And finally, there is a reference to a Logical Layer Viewpoint work package. This corresponds to the Layer object, which I will go into in greater detail in the next section.

Architectural Definition View .jpg

The final activity for this article is to introduce the new platform service capability. For this I will provide the initial logical contextual view, which is the SOA Service Appliance Context view. This is perhaps the first of several logical architecture views. It represents a logical perspective of the IBM SOA Appliance family in the larger solution context. I've provided a high-level abstraction of the technical capabilities of the appliance, primarily because the solution architect may have several choices from the IBM product family, and also because I have not yet added the business process viewpoints, which will drive some requirements of the solution architecture and will undoubtedly inform the SOA appliance product selection. This activity is underway, and in the role of the Business Architect I've also started to perform a value stream analysis as part of a Six Sigma BPM exercise. The results of that work will be presented in upcoming blogs.

So let's now turn to the logical context model. Essentially the SOA Service Appliance Context view also represents several objects in the meta-model: primarily the Layer object, and the Capability object, which is represented in terms of the technical functionality of the SOA appliance. The Enterprise Platform Service Appliance package is a layer that represents the Solution Building Block object. The Infrastructure Services package is actually contained in the Technical Reference Architecture layer, which means that it is aligned with the Enabling Technologies. The Service Layer, which is in the Business Architecture domain, represents a Capability object. The Business Application layer is a high-level abstraction of the Portal solution; the layers within this package will be aligned with the metamodel in a later article. Finally, there is an Enterprise Information Systems layer. This layer represents Enabling Technology, Solution Building Block, and Capability objects in the metamodel.

SOA Service Appliance Context View.jpg

In the next blog, it's very likely that I will refine some elements in the preliminary phase work packages in preparation for a formal gate review. I will introduce requirements, which has already been an ongoing activity, but the primary focus will be on the Architecture Vision. Now that I have introduced the heat map, it will be a first-class artifact for iteration planning. Also, if time and space permit, I will introduce some ideas around a larger agile framework in which architectural epics are presented as an integration with Scrum-like project management methodologies. While this is a bit out of scope, I think it is worth consideration. I will also take a deeper dive into the Business, Application, and Technology domains, with an emphasis on the business process and the information model.

For those of you who would like to follow along by viewing the EA model, I have provided the content repository as Web documents in a ZIP file. NOTE: This is the content repository in the early stage of the project. As I move through the iterations I will enrich the model content and clean up the repository. Click-Here – FYI Bank EA Model


JMeter Testing for a Datapower ESB Implementation – Part 1


Introduction

When considering testing a DataPower implementation, the first tool that is generally mentioned is SoapUI. While this is a good tool for a particular aspect of testing, you may need to expand your testing capabilities to include a broader set of concerns. In this blog I'd like to consider an architectural scenario in which I will need to cover a range of architectural patterns.

The Architectural Components

DataPower deployed as a single component in the architecture provides very little in terms of the need for a testing solution. For this example I'll consider the following architecture: a DataPower XI52 deployed as an ESB, WebSphere MQ for protocol mediation, and LDAP for authentication and authorization. The client will use RESTful calls with transformation between XML and JSON. An FTP server has also been added to the scenario. Oh yes, the web service is SOAP based. Finally, I have two DataPower application domains, what I'll call DEV-DEV deployed on port 5400 and DEV deployed on port 5420. This could also be DEV to QA or staging. This basic architectural configuration will cover the following DataPower patterns: MQ to MQ; HTTP to MQ; MQ to HTTP; DataPower LDAP integration; DataPower FTP integration; many-to-many transformation; SOAP WS integration; RESTful to SOAP integration.

The Test Plan

I'm in the early development phase of my project, and I need to set up some unit tests:

1. I want to place a RESTful GET call to the appliance and evaluate the response
2. I want to PUT & GET messages from queues
3. I want to be able to either a) use or b) bypass the DataPower ESB to get a file from the FTP server
4. I want to call Active Directory and get back the DN
5. I want to call a web service operation using HTTP transport
6. I'd like to do some preliminary load testing

The other requirement is that I'd like to do most if not all of my unit testing from a single environment, as well as switch between the DataPower application domains. And to add a typical twist: I have no budget for an advanced testing suite. Methinks this is not too far off from a real-world scenario!

The Test Tool

This is where I turn to Apache JMeter. Apache JMeter™ is an open source desktop tool that is built as a 100% pure Java application. The tool provides functional behavior testing capability, and is also designed to perform load and performance testing. It was originally designed for testing Web Applications but has since expanded to other test functions. http://jmeter.apache.org/

Test Plan Configurations

My project has a formal testing strategy, so I will suggest my JMeter test plan as a starting point for QA. Generally, I set up the Test Plan name as {Project Name} Unit Test with a {Version}. As the project matures I will move the test plan into CM and baseline it for each release.

Shot-Test Plan Config.png

Another good practice is to set up the test plan with naming conventions that reflect the architectural design. The next task is to set up a Test Result view using JMeter's predefined Listener components. Here I have chosen to include a Results Tree and an Aggregate Report, which I will use for initial load testing results. Depending on the availability of the components in the architecture, this test configuration will capture metrics for a performance baseline. The listener widgets are found in the Add > Listener menu by right-clicking on the root test plan. This is the general case for each of the test widgets that are available in JMeter. Also, the tool is context aware in terms of the features available to the test plan and test threads.

Next I'll use a JMeter User Defined Variables widget, which is a nice feature that will enable me to easily switch between application domains. JMeter also allows you to add variables that can be inserted as part of the test thread execution, as well as adding metadata to the test plan. For example, in this configuration I have set up a Service Name variable which can be used to configure port-type attributes for SOAP calls. However, this capability is not limited to this scenario alone. As you will see, I will create a range of variables for configurations throughout the plan and cases.

UserVars.png

Perhaps the key takeaway here is the use of variable categories. For example, the DataPower Base Appliance variables are used as the default settings for the appliance IP address, port, and protocol, which are configured as property settings: Appliance: ${__property(dp.test.address,,192.170.25.xx)}, Port: ${__property(dp.test.port)}, and Protocol: ${__property(dp.test.protocol,,http)}. I can then set up overrides for each application domain. For example, this configuration has an additional integration test environment and a QA environment, which have override values for dp.test.address and dp.test.port. This is where I will switch between domains by simply enabling the User Defined Variables that point to the environment I want.

UserVars2.png

Also, because these variables are global, they can be used across any of the test threads in your test plan. Another nice capability is that a User Defined Variable can be configured to generate custom values such as GUIDs, which can be a handy feature. Here I've provided a seed JavaScript algorithm for generating a multi-part GUID with random numbers and characters.

var chars = '0123456789abcdef'.split('');
var uuid = [], rnd = Math.random, r;
uuid[8] = uuid[13] = uuid[18] = uuid[23] = '-';
uuid[14] = '4';
for (var i = 0; i < 36; i++) {
  if (!uuid[i]) {
    r = 0 | rnd() * 16;
    uuid[i] = chars[(i == 19) ? (r & 0x3) | 0x8 : r & 0xf];
  }
}
uuid.join('');

I will also set up a load configuration User Defined Variable. This is used for creating various conditions for preliminary load testing by changing the number of test threads and loops. Finally, I have set up an HTTP Authorization Manager for basic authorization against an HTTP server, should it be needed in the course of testing.

Test Plan – Testing Thread Groups

Once the foundational configurations have been completed, I will then set up a series of test threads for each architectural component. I've configured four Thread Groups that are named to reflect the components of the solution architecture.

TopView.png

The LDAP Test Case

The first Thread Group provides a test for the LDAP services. For the most part this test thread is taken from the JMeter documentation and has been modified for this solution. I will use this test for checking the availability of the LDAP server, as well as to query Active Directory for DNs or other LDAP attributes that I may be interested in.

LDAP-Test Run.png

This is the Test Result view for a series of LDAP test queries. There are several things to point out.

  • Take note that several items in the test case are greyed out. This means that these tests are disabled for this test run.
  • The LDAP Thread Group has been enabled. Note that for several of the test cases there are XPath Assertions, which I can use for a deeper evaluation of a successful test case.
  • The individual Test Results view has a view tree and three tab views: “Sampler result”, “Request” and “Response data”. The view tree provides feedback on each.

The Tree View Output indicates the outcome of the test case.

  1. Basic Request using various filters
  2. 2. Search Test, 2.1 Search Test and 2.2 Search Test indicate that something in the test case has failed.
  3. In this case I have highlighted the Response Code 800 exception that was returned from the AD-LDAP query.
  4. However, the Response also returned a DN, based on the test parameter “Search with filter” (sAMAccountName=adminuser)

The Compare Test “Passed”; a view of the Response data tab would provide the following XML response:

<ldapanswer>
<operation>
<opertype>compare</opertype>
<comparedn>cn=dparch, ou=DatapowerESB</comparedn>
<comparefilter>sAMAccountName=dparch</comparefilter>
</operation>
<responsecode>0</responsecode>
<responsemessage>Success</responsemessage>
</ldapanswer>

The MQ Test Case

The next test case covers testing MQ as JMS Point-to-Point, and also the use of a custom extension for sending and receiving messages from MQ. In the most basic scenario JMeter is set up to PUT an XML payload message to a queue that has been configured using the JMS Context and the JNDI properties supplied by the MQ administrator.

In the JMS Point-to-Point scenario, the ESB has been configured as a web service proxy. In this case the test results must be viewed as part of the ESB's transaction history. Additionally, by using IBM MQ Explorer you can also check the current queue depth if problems have been encountered at the message consumer end-point. While this is a simple solution, it has its advantages: you run a series of P2P test cases by cutting and pasting messages built in your development environment into the content window, which are then placed on the queue. While this involves the use of other development components, the test case is repeatable throughout the development and testing lifecycle.

JMS-Point-to-Point.png

In this second scenario, the ESB has again been set up as a web service proxy. However, the architecture calls for protocol mediation between the client, the ESB and the service end-point, which is a typical event-driven architectural solution.

In this test case JMeter will use a custom Java Request extension. The JMeter source code package provides a fairly simple set of reference class implementations, which can be used to extend the tool's capability to operate as a provider/consumer of messages on the queues. While this approach is a bit more sophisticated, in that it requires some Java development, it provides an excellent value add in terms of a reusable end-to-end testing capability. The screenshot shows the use of a custom SendMessage class that has been set up to read in a set of parameters for the send queue; there is a corresponding GetMessage class for the response queue.

In addition to the MQ setup, there are 3 parameter values which enhance this configuration.

  • Service Name = Service identifier for the target end-point.
  • ClientName = Operation request identifier.
  • BaseDir = Location of the Message payload data file.

Using these parameters, JMeter will use the Service Name value as the name of the message file and read the data content from that file. For example, the SOAP message that was used in the P2P content window can now be placed in a file, read by a Java extension class, and then PUT on the send queue. A GET operation can then be used to read the response queue. This approach can provide flexibility in terms of test automation and load testing.

Busi-toServ.png
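To make the extension mechanism concrete, here is a minimal sketch of what a SendMessage-style Java Request sampler could look like. It is an illustration only: the parameter names mirror those above, the payload is read from a file named after ServiceName under BaseDir (hypothetical values), and the actual MQ put is left as a stub, since the real class would also use the MQ client classes and queue details supplied by the MQ administrator.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

// Hypothetical SendMessage sampler: reads the payload file named by ServiceName
// from BaseDir and reports the outcome back to JMeter's listeners.
public class SendMessage extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("ServiceName", "getCustomer");    // service identifier / payload file name
        args.addArgument("ClientName", "UnitTest");         // operation request identifier
        args.addArgument("BaseDir", "/data/testmessages");  // location of the payload files
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            String payload = new String(
                    Files.readAllBytes(Paths.get(context.getParameter("BaseDir"),
                            context.getParameter("ServiceName") + ".xml")),
                    StandardCharsets.UTF_8);

            // Put 'payload' on the send queue here using your MQ client API of choice
            // (omitted in this sketch); a companion GetMessage class would read the reply queue.

            result.setSamplerData(payload);
            result.setSuccessful(true);
            result.setResponseMessage("Send was successful");
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage("Send failed: " + e.getMessage());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}

Typically you would compile this against the JMeter core and Java protocol jars, package it as a jar in JMeter's lib/ext directory, and it will then appear in the Java Request sampler's Classname drop-down.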

For this type of test case the result is rather straightforward: the Response tab would simply read “Send was successful”. However, because this is an end-to-end test, the GET queue provides the round-trip test case; I can then evaluate the request against the expected response. In this example the response queue is reporting a failure. While not shown here, the reply could be a SOAP fault message or some other fault returned by the queue manager. This is where I will add test Assertions to evaluate the reply message using an XPath expression. This approach is dependent on the goal of the test case.

Event=-Driven-Test.png

This is also where I will turn to the Aggregate Report, by changing the Normal User Parameter value to apply various load factors. In this example I have set JMeter for 5 threads with 10 loops through the test case.

AggReport.png

The results are recorded in milliseconds (a small worked example follows the list below):

  • Average – The average time of a set of results
  • Median – The median is the time in the middle of a set of results. 50% of the samples took no more than this time; the remainder took at least as long.
  • 90% Line – 90% of the samples took no more than this time; the remaining samples took at least as long (90th percentile)
  • Min – The shortest time for the samples with the same label
  • Max – The longest time for the samples with the same label
  • Error % – Percent of requests with errors
  • Throughput – the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
  • Kb/sec – The throughput measured in Kilobytes per second
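As a sanity check on how these columns are derived, the snippet below computes the same statistics from a list of sample times. The values (and the assumed wall-clock duration) are purely illustrative, not taken from the run above, and the median/percentile calculations are simplified relative to JMeter's own implementation.

import java.util.Arrays;

public class AggregateStats {
    public static void main(String[] args) {
        long[] ms = {120, 95, 210, 130, 90, 400, 110, 150, 105, 98}; // sample times in milliseconds (illustrative)
        Arrays.sort(ms);
        double avg = Arrays.stream(ms).average().orElse(0);
        long median = ms[ms.length / 2];                       // middle value of the sorted samples
        long p90 = ms[(int) Math.ceil(0.9 * ms.length) - 1];   // 90% of samples took no more than this
        long min = ms[0], max = ms[ms.length - 1];
        double elapsedSec = 5.0;                               // assumed wall-clock duration of the run
        double throughput = ms.length / elapsedSec;            // requests per second
        System.out.printf("avg=%.1f median=%d 90%%=%d min=%d max=%d throughput=%.1f/sec%n",
                avg, median, p90, min, max, throughput);
    }
}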

Conclusion

In part 1 I've tested two components of my architectural scenario. I've also introduced the use of the extension capabilities to cover an end-to-end test case. In part 2 I will build out the test case for RESTful testing, as well as the FTP server test case. In part 3 I will cover how to use the Java extension capability of JMeter to enhance my testing capability, as well as provide a reusable test plan for QA and some preliminary integration performance testing.

DataPower XC 10 seamless integration with DataPower XI 52


The need for cache is inevitable in the world of enterprise-level applications. There are boundless use cases with different combinations ranging from the data type specific cache to application specific cache. We have seen a wide variety of caching technology realizations from emerging technologies like Redis (in reality, it is more than a cache) to time-proven, simple DynaCache, which serves our purpose of caching at different levels.

IBM offers mainly two solutions for cache:

  1. IBM WebSphere DataPower XC10 Appliance
  2. WebSphere eXtreme Scale.

XC10 can be considered a simplified version of eXtreme Scale, which can be used for HTTP session management, as an extension of DynaCache, and also as a side cache for memory-intensive applications like IBM WebSphere Portal.

I was inspired by our recent successful implementation of DataPower XC10 with DataPower XI52 and WebSphere Portal for one of my clients, where the use case was to cache large data objects ranging from 2 MB to 30 MB. We designed a tutorial covering the complete end-to-end setup and implemented a use case to experience the power and ease of setup of the XC10 device.

In the tutorial, our use case will be to set up a cache for a service that connects to the “OPEN WEATHER MAP API” (http://openweathermap.org/API ) to get the weather for a given city, and we will cache the responses from the service. Even though this might not be the most appropriate caching scenario, we will use it for simple demonstration purposes.

With the introduction of firmware V6.0 for the XI52, the integration between XI52 and XC10 has become seamless. We can get the integration set up and working in an hour. In the tutorial, we will show the setup of DataPower XC10 seamless integration with XI52. The diagram below shows the high-level setup overview, where the XI52 connects to the Open Weather Map API.

 

XC10DevSetup

You can download the tutorial here XC10IntegrationWithXI52.

Please leave your comments or questions regarding the setup below.

References

  1. Enterprise Caching Solutions using IBM WebSphere DataPower SOA Appliances and IBM WebSphere eXtreme Scale http://www.redbooks.ibm.com/abstracts/sg248043.html

 

Datapower XI 52 and XC 10 Integration:Encode/decode the cache key


In our previous blog post we saw how easy it is to set up a seamless integration between the XI52 and XC10. In this post, we dive into a little more detail of the seamless integration by understanding how DataPower encodes and decodes the cache key.

DataPower XI52 uses the URL as the cache key for storing and retrieving the object from the XC10 caching device. When a request is made to the XI52, it makes a call to the backend and stores the successful response in the XC10 device with the encoded URL as the cache key. For subsequent requests the XI52 uses the encoded request URL as the cache key to retrieve the response from the cache. The conversion of the URL into a cache key is a two-step process:

  1. The request URL is base64 encoded with input character set as UTF-8
  2. The result is then URL encoded.

The result will be a cache key which can be used to retrieve the cached object.

For example,

  1. Let's assume the request URL for a service on DataPower XI52, with seamless integration to XC10 enabled and responses stored in the grid named “physician”, is
    http://www.hospitial.com:5454/physician/appointments/list?StartDate=08/14/2014&EndDate=08/26/2014
  2.  The base 64 encoded version will be
    aHR0cDovL3d3dy5ob3NwaXRpYWwuY29tOjU0NTQvcGh5c2ljaWFuL2FwcG9pbnRtZW50cy9saXN0P1N0YXJ0RGF0ZT0wOC8xNC8yMDE0JkVuZERhdGU9MDgvMjYvMjAxNA==
  3.  By URL encoding the above result we will get the cache key aHR0cDovL3d3dy5ob3NwaXRpYWwuY29tOjU0NTQvcGh5c2ljaWFuL2FwcG9pbnRtZW50cy9saXN0P1N0YXJ0RGF0ZT0wOC8xNC8yMDE0JkVuZERhdGU9MDgvMjYvMjAxNA%3D%3D
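If you need to compute (or reverse) a cache key outside of DataPower, for example when scripting cache retrievals, the two steps are easy to reproduce. The sketch below is plain Java using only the standard library; the URL is the one from the example above, so the resulting key should end in %3D%3D just as shown.

import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Xc10CacheKey {

    // Step 1: Base64 encode the request URL (UTF-8). Step 2: URL encode the result.
    static String encode(String requestUrl) throws Exception {
        String b64 = Base64.getEncoder()
                .encodeToString(requestUrl.getBytes(StandardCharsets.UTF_8));
        return URLEncoder.encode(b64, "UTF-8");
    }

    // Reverse the two steps to recover the original request URL from a cache key.
    static String decode(String cacheKey) throws Exception {
        String b64 = URLDecoder.decode(cacheKey, "UTF-8");
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String url = "http://www.hospitial.com:5454/physician/appointments/list"
                + "?StartDate=08/14/2014&EndDate=08/26/2014";
        String key = encode(url);
        System.out.println(key);          // cache key, ending in %3D%3D as in the example above
        System.out.println(decode(key));  // prints the original request URL
    }
}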

To retrieve the cached object using the cache key we need to point to the correct data grid.

https://CacheCollective/resources/datacaches/{gridname}/{gridname.LUT}/{cachekey}

Calling the above URL with valid credentials will retrieve the cached object. In our example the URL will look like

https://CacheCollective/resources/datacaches/physician/physician.LUT/aHR0cDovL3d3dy5ob3NwaXRpYWwuY29tOjU0NTQvcGh5c2ljaWFuL2FwcG9pbnRtZW50cy9saXN0P1N0YXJ0RGF0ZT0wOC8xNC8yMDE0JkVuZERhdGU9MDgvMjYvMjAxNA%3D%3D

CURL Command for cache object retrieval

curl --user <username>:<password> -k https://{XC10deviceaddress}/resources/datacaches/physician/physician.LUT/aHR0cDovL3d3dy5ob3NwaXRpYWwuY29tOjU0NTQvcGh5c2ljaWFuL2FwcG9pbnRtZW50cy9saXN0P1N0YXJ0RGF0ZT0wOC8xNC8yMDE0JkVuZERhdGU9MDgvMjYvMjAxNA%3D%3D

The following are scenarios in which decoding the XC10 key will be helpful:

  1. Retrieving an object stored via seamless integration from a different multi-protocol gateway.
  2. Building the URL to call through cURL or the command line.
  3. Debugging integration issues.

Links

  1. Base 64 encoding http://www.base64encode.org/
  2. URL encoder http://meyerweb.com/eric/tools/dencoder/

Building an ESB Capability


Building ESB Capability Java EE -vs- Configuring a Datapower SOA Appliance

Implementing a Java network infrastructure solution versus network appliance configuration

It's not unusual for a seasoned Java implementer, when exposed to an IBM DataPower appliance for the first time, to question the technological advantage of a configurable network device. I feel this question is best examined from an application architecture perspective.

Fundamentally, every implementation is the realization of a prescribed software architecture pattern and approach. From this viewpoint I'll use a lightweight architectural tradeoff analysis technique to analyze the suitability of a particular implementation from the perspective of two technology stacks: the Java Spring framework combined with the Spring Integration extensions, and the IBM DataPower SOA appliance.

In this tradeoff analysis I will show the advantage of rapidly building and extending a bus capability using a configurable platform technology, versus Spring application framework components and the inversion of control container.

High-Level Requirements

The generic Use Case scenario: receive an XML message over HTTP, transform the XML input message into SOAP/XML format, and deliver the payload to a client over an MQ channel.

Proposed Solution Architecture

Solution 1

Using an EIP pattern to provide a conceptual architecture and context, let's consider the following ESB-type capability. This solution calls for a message gateway, a message format translator, and a channel adapter.

Assumptions

  1. The initial release will not address the supplemental requirements, such as logging, persistent message delivery and error back-out.
  2. This next release will be extended to include a data access feature, as well as the supplemental requirements.
  3. Message end-points, message formats, queue configurations, database access and stored procedure definitions have all been fully documented for this development life-cycle sprint.

Architectural Definition

  • To receive messages over HTTP you need to use an HTTP Inbound Channel Adapter or Gateway.
  • The Channel Adapter component is an endpoint that connects a Message Channel to some other system or transport.
    • Channel Adapters may be either inbound or outbound.
  • The Message Transformer is responsible for converting a message’s content or structure and returning or forwarding the modified message.
  • IBM MQ 7.x has been supplied as part of the messaging infrastructure capability.

Technology Stack Requirements

Spring / Java SE Technical Reference – Standards Information Base

  • Spring 4.0.x
  • Java SE 6 or 7
  • Spring Extension: Spring Integration Framework 4.1.x
  • Spring Extension: XML support for Spring Integration
  • Apache tomcat 7.x.x
  • Spring run-time execution environment (IoC container)
  • Eclipse for Spring IDE  Indigo 3.7 / Maven

DataPower XI/XG Appliance Technical Reference – Standards Information Base

  • Configurable Multi-protocol gateway (XG45 7198 or XI52 7199)
  • XSLT editor- XMLSpy (Optional)
  • Eclipse for Spring IDE  Indigo 3.7 (Optional)

Architecture Tradeoff – Analysis Criteria  

For the application architectural analysis I will use the following architecture “illities”:

  • Development velocity
    • In terms of code base, development task, unit testing.

Development Velocity Analysis – Design and Implementation Estimates

Assumptions

  1. Development environments, Unit Test cases / tools, have been factored into the estimates.
  2. Run-time environments must be fully provisioned
  3. Estimates based on 6.5 hour work day
  4. 2 development resources for the implementation (1 Development Lead and 1 Developer)

Java SE using Spring Framework and Spring Integration Extensions.

Java EE Spring Framework

Architecture Component | Design Component(s) | Development Task | Effort / Hr.
Message Gateway | Http Inbound Gateway | XML wiring http Inbound Adapter | 6.5
 | Http Namespace Support | XML wiring of Spring Component | 6.5
 | Timeout Handling | XML wiring of Spring Component | 6.5
Http Server | Apache / Jetty | Build Web Server instance | 12
Exception Handling | Error Handling | XML wiring of Spring Component | 12
Message Transformer | XsltPayloadTransformer | XML wiring of Spring Component | 13
 | Transformation Templates | Build XML Transformation Template | 12
 | Results Transformer | XML wiring of Spring Component | 13
Channel Adapter (Direct Channel) | | XML wiring Outbound Gateway | 2.5
 | | Build Attribute Reference File | 12
Estimation (hrs) | | | 96
Estimated Duration (Days) | | | 15

DataPower SOA appliance with standard configuration components.

DataPower Appliance

Architecture Component | Design Component(s) | Development Task | Effort / Hr.
Message Gateway | Multi-protocol Gateway | Name and Configure MPG | 3
 | XML Manager | Name and Configure XML Manager |
Message Transformer | Multi-Step Transform Action | Build XSLT Transformation Code | 13
Channel Adapter (Direct Channel) | MQ Manager Object | Name and Configure MQ Manager | 2
Estimation (hrs) | | | 18
Estimated Duration (Days) | | | 3

Architecture – Architectural Tradeoff Analysis

In terms of development velocity, a DataPower implementation requires approximately 70% less effort. This is primarily due to DataPower's Service Component Architecture design and the forms-based WebGUI tool that is used to enable configuration features and enter the required parameters for the service components.

DataPower Services

The Java development velocity may be improved by adding development resources to the Java implementation; however, this will add development cost and complexity to the overall project. Efforts around XML transformations are for the most part equal: both the Spring framework and DataPower use XSLT templates to implement this functionality.

Use Case Description for next release

In the next development iteration, our new Use Case calls for additional data from a legacy business application. Additionally, there is a supplemental requirement for persistent messaging with MQ back-out for undelivered messages on the channel.

Extended Solution Architecture

Solution 2

Development Extension Analysis – Design and Implementation Estimates

Assumptions

  1. Message end-points, message formats, queue configurations, database access and stored procedures have all been defined and documented for the development life-cycle.

Architectural Definition

  • Must access a stored procedure from legacy relational database.
  • Must support Message Channel to which errors can be sent for processing.

Java SE using Spring Framework and Spring Integration Extensions

Java EE Spring Framework

Architecture Component | Design Component(s) | Development Task | Effort / Hr.
SQL Data Access | JDBC Message Store | XML wiring of Spring Component | 6.5
 | Stored Procedure Inbound | XML wiring of Spring Component | 8
 | Configuration Attributes | XML wiring of Spring Component | 3
 | Stored Procedure parameters | XML wiring of Spring Component | 3
Process SQL | | Validation/Processing of SQL DataSet | 9
Estimation (hrs) | | | 28.5
Estimated Duration (Days) | | | 5

DataPower SOA appliance with standard configuration components

DataPower Appliance

Architecture Component | Design Component(s) | Development Task | Effort / Hr.
SQL Data Access | SQL Resource Manager | Configure Db Resource | 2
Process SQL | XSLT Transformer – Database | Build XSLT Transformation Code | 10
Estimation (hrs) | | | 12
Estimated Duration (Days) | | | 2

Architecture Tradeoff – Analysis Criteria  

For the application architectural analysis I will use the following architecture “illities”:

  • Extensibility
    • Adding persistent messaging on the channel with back-out functionality.
    • Adding data access and stored procedure execution from legacy database.

Architecture – Architectural Tradeoff Analysis

In terms of development extensibility, the DataPower implementation requires approximately 50% less effort. This is primarily because extending DataPower for these new requirements will not require additional programming for the data access functionality.

Again, for this additional functionality, processing the SQL stored procedure dataset will require a programming effort for both implementations. The primary difference for Spring is the addition of 3 new components versus the configuration of a database access component on the DataPower appliance.

In terms of adding persistent messaging with back-out functionality, DataPower's built-in queue management service only requires the implementer to enter the defined queue parameters. This is a net-zero programming effort.

Conclusion

Undoubtedly the Spring framework, along with Spring Integration and the inversion of control (IoC) container, provides the Java developer with a powerful application framework whose functions are essential in messaging or event-driven architectures.

However, the DataPower appliance offers this functionality out-of-the-box as a purpose-built, non-disruptive network device. In short, DataPower is the concrete implementation of much of what Spring and the Integration Framework offer programmatically.

As cross-cutting concerns and non-functional requirements around security and web service integration emerge, the value of the appliance's configuration capability will become even more apparent.

Integrating IBM Integration Bus with DataPower XC10 – (REST APIs)


Introduction

In this article we will discuss how you can integrate IBM Integration Bus (IIB) with WebSphere DataPower XC10 (XC10) using the REST API functionality of the XC10 device. Since the release of V9, IIB can connect to an external data grid cache through an eXtreme Scale client to improve the performance of SOA services. The good thing about integrating via the XC10 REST APIs is that it can also be done from earlier versions of WebSphere Message Broker (WMB), such as 7 and 8.

For a complete understanding of Global Cache and External Cache on IIB, please consider the following link:
http://www.ibm.com/developerworks/websphere/library/techarticles/1212_hart/1212_hart.html

If you’re looking for details on how to achieve this integration using the Java APIs MbGlobalMap, please visit:
http://www.ibm.com/developerworks/websphere/library/techarticles/1406_gupta/1406_gupta.html

Also, a detailed explanation of the Side Cache pattern in a SOA architecture is beyond the scope of this article. To learn more about the pattern, please consider the following links:

The requester side caching pattern specification, Part 1: Overview of the requester side caching pattern
http://www.ibm.com/developerworks/webservices/library/ws-rscp1/

Cache mediation pattern specification: an overview
http://www.ibm.com/developerworks/library/ws-soa-cachemed/


Assumptions

Before going ahead, let's explain some of the terms that you will find throughout the rest of the article:

Cache Hit

IIB will first query the XC10 cache when receiving a request for a cached service. If the requested data is found in the cache (HTTP return code 200 OK received from the XC10 API), the request can be fulfilled by the cache, avoiding a backend call.

The flow stops right there by returning the data to the client; no further processing is needed, and the most expensive task, calling the backend, is avoided.

Cache Miss

On the other hand, if IIB doesn't find the requested data in the cache (HTTP return code 404 received from the XC10 API), it will need to invoke the backend. After the data is retrieved from the backend, it is inserted into the cache to speed up subsequent calls.

Bottom line: a Cache Hit is faster than a Cache Miss. The best performance is achieved when the majority of requests can be served by the cache. That is the goal: to maximize end-to-end performance through the use of the side cache pattern. And it is done transparently to the client, which assumes it is still hitting the backend.
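Before looking at the IIB message flow, here is the hit/miss decision in a minimal, generic form. The sketch below is plain Java with placeholder helpers (it is not the IIB implementation, which follows later as a message flow and ESQL); it only illustrates the branching described above.

// A minimal sketch of the cache-aside (side cache) decision; the three abstract
// helpers are placeholders for the XC10 GET, the backend call, and the XC10 POST.
public abstract class SideCacheHandler {

    protected abstract String queryCache(String key);          // returns null on HTTP 404 (cache miss)
    protected abstract String callBackend(String request);     // the slow backend service
    protected abstract void insertCache(String key, String body); // e.g. HTTP POST into the grid

    public String handle(String key, String request) {
        String cached = queryCache(key);
        if (cached != null) {                    // cache hit: respond immediately, no backend call
            return cached;
        }
        String response = callBackend(request);  // cache miss: invoke the backend
        insertCache(key, response);              // so the next request for this key is a hit
        return response;
    }
}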


 

Scenario

For the scope of this article, we will be caching a SOAP Web Service (WS) response mocked in SoapUI. To better illustrate the scenario, we add a delay to the SoapUI mock responses.

The same Caching concept applies to database queries or different types of backends. The most important part is a good understanding of what type of data can be cached – static data vs dynamic data, which will depend on your environment, architecture and service usability.

Proposed Architecture

The below diagram illustrates the proposed architecture to achieve the Side Cache pattern for a SOAP WS using IIB and XC10 REST APIs:

IIBXC10Architecture

 


 

IBM Integration Bus Architecture

Security

Before introducing the message flows and sub-flows that were used to achieve the integration, we will cover two important steps related to the XC10 security on IIB.
By default, the XC10 REST APIs require HTTP Basic Authentication, so you will need to perform the following configuration on IIB:

Security profile for HTTP Basic Auth

Using the MQSI command console, issue the commands below to register the user credentials and to create a security profile:

$ mqsisetdbparms <BROKERNAME> -n <securityIdName> -u <user> -p <pass>

$ mqsicreateconfigurableservice <BROKERNAME> -c SecurityProfiles -o <securityProfileName> -n "propagation,idToPropagateToTransport,transportPropagationConfig" -v "TRUE,STATIC ID,<securityIdName>"

Attaching the securityProfile to the BAR file

Once the security profile is created, we need to attach it to the BAR file. At this point you should not have your BAR file ready yet, but since it’s a pretty straightforward task, we will cover it now.
Click on the BAR file, go to the Manage tab, expand your application and select the Message Flow as shown in the picture. In the Configure properties that appear below, scroll down and set the security profile you just created. IIB will now add the Basic Auth header to the HTTP requests used in this flow.

IIBXC10Architecture_2

 

Overall MsgFlow

This is an overview of the complete message flow implementing the side cache pattern.

IIBXC10Architecture_3

A brief description of the flow can be:

The SOAP Input node exposes a Web Service interface. Once it is called, the SF_CacheQuery sub-flow first checks whether the requested data is cached by hitting the XC10 API (HTTP GET method). If that is successful, the response is returned to the client immediately and no further processing is done. Otherwise, the Invoke_BE_getCustomer node calls the SOAP Web Service. A Flow Order node first returns the response to the client, and after that the SF_CacheInsert sub-flow inserts the response data into the XC10 cache grid (HTTP POST method).

Note that neither error handling nor retry logic for the calls to the XC10 APIs and the SOAP backend has been implemented. You will certainly want to improve that and adapt the flow to your own needs.

Cache Query SubFlow

As described above, this sub-flow queries the XC10 cache grid by sending a GET to the REST API, using one of the incoming request fields as the key identifier.
You should consider using a small timeout on every call to the XC10, because we don't want to add processing overhead for the client in case of connection problems or if the XC10 is not available. For example, in this PoC I used a timeout of 2 seconds.

IIBXC10Architecture_4

Set CacheQuery Params Compute Node – ESQL

The ESQL below contains the code used to query the cache. First we save the incoming request into an environment variable for later use.
After that, InputKEY is referenced from the incoming SOAP body. This is the key that will be used to query/insert the data in XC10. As the name suggests, you need to use something that uniquely distinguishes one request from another; if needed, you can even concatenate two or more fields and use that as your key.
We are then ready to query the XC10 cache by overwriting the HTTP method to GET and specifying the RequestURL in the XC10 REST API notation.
A successful response is acknowledged by XC10 with an HTTP 200 along with the data previously cached.

-- Storing incoming request in case of Cache Miss
SET Environment.Variable.InputMessage = InputRoot.SOAP;

-- Getting ID from request which is the KEY for XC10 Cache
DECLARE InputKEY REFERENCE TO InputRoot.SOAP.Body.ns:getCustomerRequest.ID;

-- GET - Query Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestLine.Method = 'GET';
-- XC10 URL for Query Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestURL = 'http://192.168.122.1:7000/resources/datacaches/' || CACHENAME || '/' || CACHENAME || '/' || CAST(InputKEY AS CHARACTER);

Note: Since we're overwriting the HTTPRequest properties from a previous node, make sure the Compute node's Compute mode property is set to LocalEnvironment and Message:

IIBXC10Architecture_LocalEnv

 

Cache Insert SubFlow

After a Cache Miss, this sub-flow inserts the response data into the XC10 cache grid by sending an HTTP POST request to the REST API, again using one of the incoming request fields as the key identifier. The data to be cached is mandatory and is sent as the request payload.

As a reminder, you should consider using a small timeout on every call to the XC10, because we don't want to add processing overhead for the client in case of connection problems or if the XC10 is not available. For example, in this PoC I used a timeout of 2 seconds.

IIBXC10Architecture_5

Set CacheInsert Params Compute Node – ESQL

The ESQL below contains the code used to insert into the cache. First, InputKEY is referenced, this time from the request message that was saved earlier in the environment variable. As before, this key must uniquely distinguish one request from another.
We are then ready to insert into the XC10 cache by overwriting the HTTP method to POST and specifying the RequestURL in the XC10 REST API notation; the response data to be cached flows through as the POST payload.
A successful insert is acknowledged by XC10 with an HTTP 200.

-- Getting ID from request which is the KEY for XC10 Cache
DECLARE InputKEY REFERENCE TO Environment.Variable.InputMessage.Body.v1:getCustomerRequest.ID;

-- POST - Insert Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestLine.Method = 'POST';
-- XC10 URL for Insert Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestURL = 'http://192.168.122.1:7000/resources/datacaches/' || CACHENAME || '/' || CACHENAME || '/' || CAST(InputKEY AS CHARACTER);

Note: Since we're overwriting the HTTPRequest properties from a previous node, make sure the Compute node's Compute mode property is set to LocalEnvironment and Message:

IIBXC10Architecture_LocalEnv

 


 

XC10 REST APIs

It's beyond the scope of this article to explain everything that can be achieved with the XC10 REST APIs.
For a complete reference to the available functions, please visit:
http://www-01.ibm.com/support/knowledgecenter/SSS8GR_2.5.0/com.ibm.websphere.datapower.xc.doc/tdevrest.html

POST

You can use any HTTP client to interact with the XC10 REST APIs. HTTPRequest nodes are used in IIB, but you can also use cURL or SoapUI for testing purposes. There's a sample SoapUI project along with the files available for download, and sample cURL commands are available in the Appendix.

Insert Data into the XC10 for test

HTTP method: POST
HTTP header: 'Content-type: text/xml;charset=UTF-8'
URI: /resources/datacaches/[grid_name]/[map_name]/[key]

POST data:
'<xml>Sample Data</xml>'

Response:
200

GET


Retrieve (GET) Cache Data from the XC10
HTTP method: GET
URI: /resources/datacaches/[grid_name]/[map_name]/[key]

Returns Data cached previously:
'<xml>Sample Data</xml>'

If key doesn’t exist, returns HTTP error:
404
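For quick tests outside of IIB (or from a build script), the same POST and GET can be driven from a few lines of Java. The host and credentials below are placeholders; the URI layout follows the /resources/datacaches/[grid_name]/[map_name]/[key] pattern shown above, using the IIB_POC grid and map names from the Appendix.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Xc10RestClient {

    // Placeholder endpoint and credentials, for illustration only.
    private static final String BASE = "http://xc10hostname/resources/datacaches/IIB_POC/IIB_POC/";
    private static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("user:pass".getBytes(StandardCharsets.UTF_8));

    // POST inserts (or replaces) the value stored under the given key.
    static int put(String key, String xmlPayload) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(BASE + key).openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Authorization", AUTH);
        con.setRequestProperty("Content-Type", "text/xml;charset=UTF-8");
        con.setDoOutput(true);
        con.getOutputStream().write(xmlPayload.getBytes(StandardCharsets.UTF_8));
        return con.getResponseCode();   // 200 on success
    }

    // GET returns the cached value, or null on HTTP 404 (cache miss).
    static String get(String key) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(BASE + key).openConnection();
        con.setRequestProperty("Authorization", AUTH);
        if (con.getResponseCode() == 404) {
            return null;
        }
        try (InputStream in = con.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(put("1020", "<xml>Sample Data</xml>"));  // expect 200
        System.out.println(get("1020"));                            // expect the cached payload back
    }
}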

Monitoring

XC10 offers native monitoring for each data grid. Using the GUI, just follow Monitor -> Individual Data Grid Overview -> click on your grid. The monitoring and performance metrics below will appear.

IIBXC10Architecture_6

 


 

Conclusion

In this article we covered how easy it is to implement the Side Cache pattern in a SOA architecture using IBM Integration Bus and WebSphere DataPower XC10 to speed up performance. This approach can be used with a variety of backends, not only SOAP Web Services. The XC10 REST APIs provide a solid interface with all the functions necessary to make the integration straightforward. In our scenario we solved a “slow SOAP Web Service backend” problem by caching the response data in the XC10 data grid.


 

Appendix

Insert (POST) Cache Data example using command utility “cURL”

curl -u <user>:<pass> -H 'Content-type: text/xml;charset=UTF-8' -X POST -d '<xml>Sample Data</xml>' http://<xc10hostname>/resources/datacaches/IIB_POC/IIB_POC/1020

Retrieve (GET) Cache Data example using command utility “cURL”

curl -u <user>:<pass> -X GET http://<xc10hostname>/resources/datacaches/IIB_POC/IIB_POC/1020

Consider using SoapUI for a friendlier GUI instead of cURL. There's a sample SoapUI project along with the files available for download.

 

Troubleshooting Tools

Consider using troubleshooting tools such as NetTools web debugging tool to sit in the middle between IIB and XC10.
Download at: http://sourceforge.net/projects/nettool/files/

 

DOWNLOAD IBM INTEGRATION BUS PROJECT INTERCHANGE SAMPLE CODE
IIB_XC10

DOWNLOAD SOAPUI PROJECTS
IIBXC10_SoapUI
