Monday, March 2, 2009

Portal as a broker


The portal 

To most people a portal is a website or, to the more daring, a website on steroids. This view is short-sighted, because a website is but one element of a portal. Even if its interface is a website, the portal itself is not one. The portal is the environment created by the natural evolution of distributed information, its dissemination, data aggregation and data management.

Distributed information is information that often resides in branch offices or smaller applications. This vast pool of information is collected and managed away from a central source, yet it is vital to the branch operation and even more critical for the whole organization. It is the old analogy of the blind men describing an elephant, each from his own fixed position: each description is correct from that perspective, but a complete picture is difficult to articulate.

Aggregation is the ability to store the data in a central location for easy retrieval. By categorizing the data into smaller, more meaningful portions, the data starts to take shape as information. In the end the user is responsible for sifting the information from the data. This aggregation was the mainstay of the knowledge management and business intelligence revolution, but there was an anomaly: if users had access to all the data, why were they still unable to make better decisions? The answer lies in the usefulness and timeliness of the information. Aggregation was key, but not sufficient on its own.

Management of data is the balancing act between keeping data distributed and aggregating it into information. How do you best manage the data so that aggregation is possible and distribution stays easy? In future posts I will discuss the path data takes from its inception to wisdom.

If we look at the Internet for a parallel to the developmental stages of the portal, we find a first wave of websites where a wealth of data on specific topics was gathered. The Internet is said to store ninety percent of mankind’s knowledge. Children today have access to the human genome and the entire works of the greatest authors of this age and every other. However, finding the data is troublesome, and deciphering its value is a greater challenge still. This may explain why our children have not surpassed their parents in terms of intelligence. Providing access to the data does not equate to usefulness. The portal allows information to be useful, relevant and actionable.

The first wave of portals took the aggregation approach: build portals around subject matter. So we had project manager portals, employee portals, travel portals, and more. Each portal became an aggregation point of data relevant to the portal’s theme, and that data was organized and categorized by breaking it down further into smaller, portal-relevant data sets.

Yet again, an anomaly exists for many people trying to understand portals: why are people not flocking to these portal sites? The perception of the anomaly is actually the problem; understanding the interaction between the people and the site is the key. The portal is not a site, so by its very purpose it needs to be more than a collection of similar data objects. People are not trained monkeys, contrary to popular “dot com” thinking, and will not participate in a site for its data objects any longer than the data provides value. There must be more than organized data to provide a portal to information. Witness the sudden surge of social sites: the initial value is there, along with a long-term reason to keep interacting.

So, what is the portal if it is not a site? The answer is that it is a “broker.”

Broker…

The portal is the component broker for the layers of communications (human and electronic), technology, and applications and, more importantly, the business drivers of the organization. The graphic is a simple representation of how the portal links the person to the content and the applications, with applications in turn linked to content. This brokerage is the vehicle for that basic human need to filter data and to query its content. The machines categorize the information into a meaningful perspective so that each supplier or requestor of data can quickly and effectively process the data into “information packages” that have relevance to them. This packaging is then used to assist human decision-making or application-decision execution, irrespective of physical or organizational location.
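
To make the brokerage a little more concrete, here is a minimal sketch in Java. The names (PortalBroker, DataSupplier, InformationPackage) are mine, invented purely for illustration; they do not come from any particular portal product.

```java
import java.util.ArrayList;
import java.util.List;

/** A bundle of data that has been filtered and categorized for one requestor. */
class InformationPackage {
    final String topic;
    final List<String> items;

    InformationPackage(String topic, List<String> items) {
        this.topic = topic;
        this.items = items;
    }
}

/** Anything that can supply raw data: a content store, a branch system, another application. */
interface DataSupplier {
    List<String> query(String topic);
}

/** The portal acting as broker: it holds no content of its own, it only connects and packages. */
class PortalBroker {
    private final List<DataSupplier> suppliers;

    PortalBroker(List<DataSupplier> suppliers) {
        this.suppliers = suppliers;
    }

    /** Pull matching data from every supplier and hand back one relevant package. */
    InformationPackage request(String topic) {
        List<String> merged = new ArrayList<String>();
        for (DataSupplier supplier : suppliers) {
            merged.addAll(supplier.query(topic));      // aggregation across distributed sources
        }
        return new InformationPackage(topic, merged);  // the "information package" for the requestor
    }
}
```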

Human decision-making is that fine art of actually acting on information. When we use information to make a decision it becomes knowledge, and knowledge becomes the outcome of the action. How we shape the data, to decipher the information, to make the decision, is the difficult part. Without the decision we have “analysis paralysis”. The path runs from raw data to information to knowledge and finally to wisdom.

The portal eases the human burden in this “decision tree” through personalization and customization. Personalization is created by the environment, based on who the user is and what role they are currently playing; when the system knows who you are, it can tailor the experience. Customization is when the user decides to alter their experience to best fit their needs, which can take the form of skins, layout, and the addition or removal of portal parts.
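
A rough sketch of the difference, again with invented names: personalization is applied by the system from the user’s role, and customization is applied afterwards from choices the user makes.

```java
import java.util.LinkedHashSet;
import java.util.Set;

/** Illustrative only: a portal page assembled for one user session. */
class PortalPage {
    String skin = "default";
    String layout = "two-column";
    Set<String> parts = new LinkedHashSet<String>();
}

class PageAssembler {
    /** Personalization: the system decides, based on who the user is and the role they play. */
    PortalPage personalize(String role) {
        PortalPage page = new PortalPage();
        if ("project-manager".equals(role)) {
            page.parts.add("project-dashboard");
            page.parts.add("risk-register");
        } else {
            page.parts.add("company-news");
        }
        return page;
    }

    /** Customization: the user decides, changing skin and layout, adding or removing parts. */
    void customize(PortalPage page, String skin, String layout,
                   Set<String> addParts, Set<String> removeParts) {
        page.skin = skin;
        page.layout = layout;
        page.parts.addAll(addParts);
        page.parts.removeAll(removeParts);
    }
}
```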

Application-decision execution occurs when programs are triggered by the data they receive. Today this is very popular in stock markets, where programs watch whether a certain stock rises or falls and, based on that movement, automatically sell or buy. Likewise, system management software will look at system log information and be triggered to perform actions based on thresholds or alarm values.
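
The trigger logic itself is usually nothing more exotic than a threshold check against a stream of values. Here is a toy version; the class and threshold values are mine and purely illustrative.

```java
/** Toy example of application-decision execution: act when a watched value crosses a threshold. */
class ThresholdTrigger {
    private final double sellBelow;
    private final double buyAbove;

    ThresholdTrigger(double sellBelow, double buyAbove) {
        this.sellBelow = sellBelow;
        this.buyAbove = buyAbove;
    }

    /** Called for every new observation: a stock quote, a log metric, a queue depth, and so on. */
    String onValue(double value) {
        if (value <= sellBelow) return "SELL";  // or raise an alarm, page an operator, etc.
        if (value >= buyAbove)  return "BUY";
        return "HOLD";
    }

    public static void main(String[] args) {
        ThresholdTrigger trigger = new ThresholdTrigger(95.0, 110.0);
        double[] quotes = {101.2, 98.4, 94.7, 112.3};
        for (double quote : quotes) {
            System.out.println(quote + " -> " + trigger.onValue(quote));
        }
    }
}
```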

These application-decision algorithms can be beneficial or detrimental depending on the human value placed on their actions. An example was the negative effect of mass selling after the disastrous September 11th Twin Towers act of terrorism. Automatic programs ran amok, selling because other auto-sellers were taking action, and the result was a spiral of negative activity. It took human intervention to stop the downward slide of stock prices caused by these machine-generated sell orders. This clearly identifies the risk we run by deferring responsibility and accountability to program logic. Just because the tools make a job easier does not mean they don’t require a craftsman to complete the job well. The hammer is used for nailing, but it still requires the vision and purpose of the craftsman to do the job correctly. A portal can at least package information in a way that makes these sorts of application decisions more effective and efficient.

The role of the broker is essentially to connect the service requestor and the information supplier. So the portal, when acting as the broker, has no direct form. Basically, you can’t hug a portal like you can a website, because it is not a single, central, physical thing. It is the void of the data interchange, both full of knowledge and lacking in data; everything and nothing in the same instant.

No end-state… 

First and foremost, a portal has no end-state and is in constant change. Information becomes old or irrelevant, user demands are constantly changing, access to more data is inevitable, better tools to process the data become available, and so on. This change is both system-controlled and user-experience controlled, because the system allows for the categorization and the user-experience allows for content personalization. The user drives the demand, and the system drives the supply. This constant churn makes the portal a living entity with an uncertain beginning and no predefined end. 

For many, the confusion about a portal is purely an issue of perception, largely based on the absence of an end-state. A portal appears to be a manifest of disparate components until you take a higher view of its form, function and purpose.

The Roles 

The triangle shown above has three major roles: people, content and applications (apps).

People 

I am amazed at how many portals I visit that are built to impress the masses. The user at the keyboard is the only person for that session, and that should not be forgotten. Your design and the user experience should keep this in mind. It is not a flashy banner for the world to see but rather an experience geared for one person, whom you want to allow to work, to learn and to play at their own speed and in their own direction. From the portal’s perspective this breaks down into three flavours of personal identity: the anonymous, the synonymous and the known.

The anonymous experience is for those users who do not want to be known and will not leave a presence behind. This is the public internet site: no user log-on is required, and all information and applications are for public use. Examples of this are Google, Wikipedia, IMDb, etc.

The synonymous user wants to be identified, but not by their real credentials. This suits sites where the user wants a unique setup or experience but prefers not to be known specifically. This blog is an example of this type of user setup: my real name is NOT buchi, and while my shared profile information is real, my real identity is masked. Other examples of this type of login are YouTube and Gantthead for project managers.

The known user is where a true identity is required. This could be a corporate portal, your banking portal or any other portal where your true credentials are required. Some portals will even require multiple methods of identification to confirm your electronic credentials.
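
One simple way to model the three flavours in code; this is illustrative only, not any particular portal product’s API.

```java
/** The three flavours of personal identity a portal has to handle. */
enum IdentityLevel {
    ANONYMOUS,   // no log-on, public content only (e.g. Google, Wikipedia, IMDb)
    SYNONYMOUS,  // a persistent handle, but not the person's real credentials
    KNOWN        // true, verified identity (corporate or banking portals)
}

class AccessPolicy {
    /** Can a user at this identity level see content that demands a given level? */
    boolean canAccess(IdentityLevel user, IdentityLevel required) {
        // Levels are ordered from least to most trusted, so a simple comparison works here.
        return user.ordinal() >= required.ordinal();
    }
}
```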

Content

This is where the vast majority of portals focus their efforts. It is where raw information, documents, reference material and other mostly static information resides: documents, best practices, research material, snippets of code, blogs, news, movie clips, multimedia files, etc. Content is typically engaged by clicking a hyperlink, and the resulting content is forwarded to the client software, which is typically a web browser.
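
Mechanically, most of this comes down to mapping the hyperlink the user clicked to a stored item and handing it back to the browser. A bare-bones sketch, with an invented ContentStore standing in for whatever repository actually holds the documents:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative only: static content keyed by the path in the hyperlink the user clicked. */
class ContentStore {
    private final Map<String, String> documents = new HashMap<String, String>();

    /** Publish (or replace) a document under a given path. */
    void publish(String path, String body) {
        documents.put(path, body);
    }

    /** What the portal hands back to the browser for a given link. */
    String fetch(String path) {
        String body = documents.get(path);
        return body != null ? body : "404: no such document";
    }

    public static void main(String[] args) {
        ContentStore store = new ContentStore();
        store.publish("/best-practices/code-review", "Review in small batches...");
        System.out.println(store.fetch("/best-practices/code-review"));
    }
}
```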

Luckily, today there are tools to manage this content so that maintenance and upkeep can be done with minimal effort. Tools like SharePoint, Joomla, Mambo, Bricolage, Drupal and countless others fill this void.

Applications

This is where the user interacts with real applications in the form of databases and dynamic data. It could be a simple web application for shipping, tax returns, government permits, etc. The difference is that the user is engaged in what data or information is presented, updated or deleted. Some good examples of web application frameworks and platforms are Mason, .NET, the Java Virtual Machine and Tomcat.

Applications can also update content, either through the content management solutions (CMS) mentioned earlier or through graphs and reports created by user interaction. The loop becomes complete when all three roles are engaged seamlessly.
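
To sketch that loop in code: an application records the user’s dynamic data and then publishes a report back into the content side, where the portal can serve it as a page. This reuses the illustrative ContentStore from the Content section and invents a ShippingApplication purely as an example.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative only: an application that updates dynamic data and then publishes a report. */
class ShippingApplication {
    private final ContentStore contentStore;  // the same invented store from the Content section
    private final Map<String, Integer> shipments = new HashMap<String, Integer>();

    ShippingApplication(ContentStore contentStore) {
        this.contentStore = contentStore;
    }

    /** The dynamic side: the user's interaction changes what data exists. */
    void recordShipment(String destination, int parcels) {
        Integer current = shipments.get(destination);
        shipments.put(destination, current == null ? parcels : current + parcels);
    }

    /** The loop back to content: the application writes a report the portal can serve as a page. */
    void publishDailyReport() {
        StringBuilder report = new StringBuilder("Shipments by destination:\n");
        for (Map.Entry<String, Integer> entry : shipments.entrySet()) {
            report.append(entry.getKey()).append(": ").append(entry.getValue()).append('\n');
        }
        contentStore.publish("/reports/shipping-daily", report.toString());
    }
}
```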

There are many tools out there that can be assembled to fit your organizational goals. The technical goal should be to broker the person, the content and the applications, by way of the portal, in service of those organizational goals.



2 comments:

  1. I have read all the posts, and had so little time to comment. Only one makes it here today: consider that, however true all this is, the problem of data fracturing is growing because many of these models spread data across repositories that don't conform to a consistent management model. This will be the next critical fault for technologically dependent businesses. We desperately need to put forward a new methodology to approach this problem.

  2. I agree completely on the fractured management issue. Most of this is caused by legacy DB firms trying to maintain relevance and market share. The approach I prefer to take at this time is to work in an application framework that is agnostic to databases and their structure. This does NOT remove the problem of fractured management, but it does mitigate it. When you have a choice, choose the DBs that conform to your organizational standard. Also, do not get caught in the trap of chasing the next new feature at the expense of sticking to a working standard.
