The Five Key Stages Of Business Intelligence Information Technology Essay

Although the project was carried out personally, the guidance, contribution and support of several individuals had a greatly encouraging and positive impact on the project. I warmly thank my supervisor Phil Molyneux for his support and guidance throughout the project; his supervision kept the project within its scope and deadlines.

During this work I have collaborated with many colleagues for whom I have great regard, and I wish to extend my warmest thanks to all those who have helped me with my work.

I owe my most sincere gratitude to my friend Mr. Ashwani Roy for introducing me to the world of Business Intelligence and for helping me overcome many problems along the way.

I would like to express my warm and sincere thanks to my parents and my brother for all their love, support and encouragement over all these years.

Most of all, I thank GOD for His unconditional love and for making me capable of overcoming the obstacles of life.

Thank you so much for all your support and help!

Business Intelligence

Business Intelligence is the process of transforming related business data into information, information into knowledge and, through the repeated identification of patterns, knowledge into intelligence.

Data: Raw facts and figures; it represents reality.

Information: Data that has been processed and interpreted.

Knowledge: Information that has been given context, so that it carries meaning and understanding.

Business Intelligence (BI) is a wide category of applications and technologies for collecting, storing, analyzing, and providing access to data to help enterprise users make better decisions. BI applications support activities such as decision support systems, querying and reporting, online analytical processing (OLAP), statistical data analysis, forecasting and data mining.

Forrester Research defines Business Intelligence in one of two ways. Typically, Forrester uses the following broad definition (Forrester, 2008):

Business Intelligence is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision-making.

The narrower definition is: a set of methodologies, processes, architectures, and technologies that leverage the output of information management processes for analysis, reporting, performance management, and information delivery.

“Business Intelligence is neither a product nor a system. It is an architecture and a collection of integrated operational as well as decision-support applications and databases that provide the business community easy access to business data” (Moss, 2003).

BI is the delivery of accurate, useful information to the decision makers within the needed timeframe to enhance decision making.

BI is not just some facts and figures on a printed report or computer screen. BI provides foundational information on which to base a decision. BI also provides feedback information that can be used to evaluate a decision.

BI systems give an organization's users insight into the company's information assets.

The five key stages of Business Intelligence:

Data sourcing:

Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents, e.g. memos, reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. Typical sources of data therefore include scanners, digital cameras, database queries, web searches, computer file access, etc.

Data analysis:

Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarizing disparate information, validating models of understanding and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery.

Situation awareness:

Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.). Situation awareness is the grasp of the context in which to understand and make decisions. Algorithms for situation assessment provide such syntheses automatically.

Risk assessment:

  Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarizing your best options or choices.

Decision support:

Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyze and make better business decisions, to improve sales, customer satisfaction or staff morale. It presents the information you need, when you need it.

Some Definitions:

Transactional data is the information stored to track the interactions, or business transactions, carried out by an organization. (Brian Larson, 2008)

Online transaction processing (OLTP) systems record business interactions as they happen. They support the day-to-day operation of an organization. (Brian Larson, 2008)

Data Mart

A data mart is a body of historical data in an electronic repository that does not participate in the daily operations of the organization. Instead, this data is used to create business intelligence. The data in the data mart usually applies to a specific area of the organization. (Brian Larson, 2008)

Using organizational OLTP systems directly as a source for BI can cause a number of problems. To avoid them, we take the information stored in these OLTP systems and move it into a different data store, so that the data is available for BI needs outside of the OLTP systems. When data is stored in this manner, it is referred to as a data mart. (Data copied from OLTP systems periodically and written to the data mart is known as a data load.)

When designing a data mart, the rules of normalization are replaced by a different method of design organized around "facts". These design approaches are called star and snowflake schemas.

Data mart structure:

The data used for BI can be divided into four categories: measures, dimensions, attributes, and hierarchies. These four types of data help us to define the structure of data mart.

Measure:

Measures form the basis of Business Intelligence. They are the basic building blocks of effective decision making.

A measure is a numeric quantity expressing some aspect of the organization’s performance. The information represented by this quantity is used to evaluate the decision making and performance of the organization. A measure can also be called a fact. (Brian Larson, 2008)

Measures are the facts used for information. Therefore, the tables that hold measure information are known as fact tables.

Measures fall into three categories (see the SQL sketch after these definitions):

Distributive: the result derived by applying the function to n aggregate values (one per partition) is the same as that derived by applying the function to all of the data without partitioning.

E.g. COUNT(), SUM(), MIN(), MAX().

Algebraic: the measure can be computed by an algebraic function with M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function.

E.g. AVG(), MIN_N(), STANDARD_DEVIATION().

Holistic: there is no constant bound on the storage size needed to describe a sub-aggregate.

E.g. MEDIAN(), MODE(), RANK().
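As a minimal illustration (the Sales table and its region and amount columns are hypothetical, not from this project), a distributive aggregate such as SUM() gives the same answer whether or not the data is partitioned, and the algebraic AVG() can be rebuilt from the distributive SUM() and COUNT():

SELECT region,
    SUM(amount) AS region_sum,            -- distributive: partition sums roll up exactly
    COUNT(*) AS region_count,             -- distributive
    SUM(amount) / COUNT(*) AS region_avg  -- algebraic: avg = sum / count
FROM Sales
GROUP BY region;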

Dimension:

A dimension is a categorization used to spread out an aggregate measure to reveal its constituent parts.

Dimensions are used to facilitate slicing and dicing, as in the query sketch below.
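A hedged sketch (the FactSales table and its columns are hypothetical): slicing fixes one member of a dimension with a WHERE clause, while dicing spreads the measure across several dimensions with GROUP BY:

SELECT region, product_category,
    SUM(sales_amount) AS total_sales   -- the measure
FROM FactSales
WHERE sales_year = 2009                -- slice: fix one member of the time dimension
GROUP BY region, product_category;     -- dice: break the total down by two dimensions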

Measures and dimensions are stored in a data mart in one of two layouts or schemas: the star schema or the snowflake schema.

Star Schema

A star schema is a relational database schema used to hold measures and dimensions in a data mart. The measures are stored in a fact table and the dimensions are stored in dimension tables. (Brian Larson, 2008)

Star schema uses two types of tables: fact tables and dimension tables.
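A minimal T-SQL sketch of a star schema (all table and column names are hypothetical, not from this project): one fact table holds the measure and the foreign keys, and each dimension is a single denormalized table with its hierarchy levels flattened into columns:

CREATE TABLE DimProduct (
    product_key int PRIMARY KEY,
    product_name varchar(50),
    subcategory varchar(50),   -- hierarchy levels flattened
    category varchar(50)       -- into one dimension table
);

CREATE TABLE DimDate (
    date_key int PRIMARY KEY,
    calendar_date date,
    calendar_month varchar(20),
    calendar_year int
);

CREATE TABLE FactSales (
    product_key int REFERENCES DimProduct (product_key),
    date_key int REFERENCES DimDate (date_key),
    sales_amount numeric(18, 4)   -- the measure
);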

Attributes are additional information about the dimension members in a data mart. They can also be used to limit or filter the records selected from the data mart during data analysis.

Attribute:

An attribute is an additional piece of information related to a dimension member that is not the unique identifier or the description of the member. (Brian Larson, 2008)

Hierarchy:

A dimension is often part of a larger structure with many levels. This structure is known as a hierarchy.

A hierarchy is a structure made up of two or more levels of related dimensions. A dimension in the upper level of the hierarchy contains one or more dimensions from the next lower level of the hierarchy. (Brian Larson, 2008)

Snowflake schema:

The snowflake schema is an alternative to the star schema. In a snowflake schema, each level of a hierarchy is stored in a separate dimension table. The snowflake schema is more complex than the star schema because the tables describing the dimensions are normalized.

The snowflake schema retains all the advantages of a good relational design. It doesn't result in duplicate data and is, therefore, easier to maintain. Its one disadvantage is that it requires a number of table joins when aggregating measures at the upper levels of the hierarchy; hence, in larger data marts this can lead to performance problems. A sketch follows.
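Continuing the hypothetical product dimension from the star schema sketch above, snowflaking stores each hierarchy level in its own normalized table:

CREATE TABLE DimCategory (
    category_key int PRIMARY KEY,
    category_name varchar(50)
);

CREATE TABLE DimSubcategory (
    subcategory_key int PRIMARY KEY,
    subcategory_name varchar(50),
    category_key int REFERENCES DimCategory (category_key)  -- next level up
);

CREATE TABLE DimProductSnowflake (
    product_key int PRIMARY KEY,
    product_name varchar(50),
    subcategory_key int REFERENCES DimSubcategory (subcategory_key)
);

Aggregating the measure at the category level now requires joining through all three dimension tables, which is the join cost noted above.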

Data Warehouse

Ralph Kimball defines the Data Warehouse as "the conglomeration of an organization's data warehouse staging and presentation areas, where operational data is specifically structured for query and analysis performance and ease-of-use" (Kimball, 2002).

In simple words, R. Kimball said “A data warehouse is a central repository for all or significant parts of the data that an enterprise’s various business systems collect”. 

He also provided a more concise definition of a data warehouse:

A data warehouse is a copy of transaction data specifically structured for query and analysis.

Bill Inmon defines the Data Warehouse in more detail:

"A data warehouse is a subject-oriented, integrated, nonvolatile, and time-variant collection of data in support of management's decisions" (Inmon, 2002).

• It is subject-oriented, since all data should be related to a specific subject instead of the company's operations.

• A data warehouse is integrated, since the data is fed from multiple disparate data sources into the data warehouse.

• It is nonvolatile, since it stores historical data.

• It is time-variant, since every record stored was accurate at one moment in time.

There are three data warehouse models (R. Kimball, 2002):

Enterprise warehouse

Collects all of the information assets of the entire organization.

Data mart

A subset of the data warehouse that is of value to a specific group of users. Its scope is confined to specific, selected groups, such as a marketing data mart.

Virtual warehouse

A set of views over operational databases. Only some of the possible summary views may be materialized.


Data Warehouse Usage (R. Kimball, 2002):

There are three kinds of data warehouse applications:

Information processing

This supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs

Analytical processing

Analytical processing performs multidimensional analysis of data warehouse data.

It supports basic OLAP operations: slicing and dicing, drilling, and pivoting.

Data mining

Data mining is the analysis of data with the intent to discover gems of hidden information in the vast quantity of data that has been captured in the normal course of running the business (Moss, 2003). 

Hence, from the above definition, we can say that data mining discovers knowledge hidden within the data.

Moreover, it supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools.

Goals of a Data warehouse:

A data warehouse must make an organization's information easily accessible and consistent.

It must assist in the decision-making process.

It must be adaptive and flexible in the face of business change.

It must meet the business requirements.

Data warehouse development approaches:

Bill Inmon and Ralph Kimball established two different approaches to data warehouse design: top-down and bottom-up. A combination of both can also be used.

Top-down approach: (Bill Inmon approach)

In the top-down approach, the data warehouse is built first and then the data marts.

Here the data warehouse is designed based on a normalized, enterprise-wide data model.

In this approach the data warehouse acts as a single repository that feeds data into the data marts.

Advantages:

The top-down design approach exhibits highly consistent dimensional views of data across data marts as the data marts are loaded from the centralized repository.

This approach is robust against business change.

It is easy to create a data mart against the data stored in the data warehouse.

Disadvantages:

The top-down approach consumes more time in its implementation process.

In addition, the top-down methodology can become inflexible and unresponsive to changing needs during implementation.

Bottom-up: (Ralph Kimball approach)

In the bottom-up approach, the data marts are created first and then the data warehouse.

It starts with one data mart; more data marts can be added later.

Advantages:

The bottom-up approach has a quick turn-around.

This approach is easier and faster to implement as one needs to deal just with smaller subject areas in the beginning.

Disadvantages:

In this approach, there is a long-term risk of inconsistencies due to the use of multiple data marts.

Hence, one needs to ensure the consistency of metadata across the marts.

Hybrid approach: (combination of top-down and bottom-up approaches)

This approach tries to mix the best of both the top-down and bottom-up approaches.

The salient steps for applying this approach are:

First, start the design by creating the data warehouse and data marts synchronously.

Build out the first few data marts that are mutually exclusive and critical.

After these steps, backfill the data warehouse.

Then build the enterprise model and move the data to the data warehouse.

Data warehouse Dimensional Model Phases

There are four dimensional model phases as follows:

Identify the 'Business Process'

Determine the ‘Grain’ (level of detail needed)

Identify the ‘Facts’

Identify the ‘Dimensions’

Business Process:

The most important thing in the business process is to identify the business requirements of a company and to analyze them thoroughly.

Grain of data – Granularity:

Granularity is the level of detail captured in the data warehouse. The more detail, the higher the granularity, and vice versa.

Fact table:

It is similar to a transaction table in an OLTP system. It stores the facts or measures of the business, e.g. SALES, ORDERS.

Hence it contains the metrics resulting from a business process or measurement event, such as the sales ordering process or service call event.

In addition to the measurements, the only other things a fact table contains are foreign keys for the dimension tables.

Dimension table:

It is similar to a master table in an OLTP system. It stores the textual descriptors of the business, e.g. CUSTOMER, PRODUCT.

Hence it contains the descriptive attributes and characteristics associated with specific events, such as the customer, product, or sales representative associated with an order being placed.

Hierarchical many-to-one relationships are de-normalized into single dimension tables.

Data Integration Process Design

Extract, transform and load (ETL) is the core process of data integration and is typically associated with data warehousing. ETL tools are employed to populate the data warehouse with up-to-date records extracted from source systems; they are therefore useful in organizing the steps of the whole process as a workflow. Some prominent tasks carried out in this workflow are: (i) the identification of the relevant information at the source side; (ii) the extraction of this information; (iii) the transportation of this information to the Data Staging Area (DSA), where most of the transformation usually takes place; (iv) the transformation (i.e., customization and integration) of the information extracted from the multiple sources into a common format; (v) the cleansing of the resulting data set, on the basis of database and business rules; and (vi) the propagation and loading of the data to the data warehouse and the refreshment of the data marts.

One important function of ETL is “cleansing” data. The ETL consolidation protocols also include the elimination of duplicate or fragmentary data, so that what passes from the E portion of the process to the L portion is easier to assimilate and/or store. Such cleansing operations can also include eliminating certain kinds of data from the process.

Data cleansing removes inconsistencies and errors from transactional data so it has the consistency necessary for use in data mart. (Brian Larson, 2008)

Data cleansing transforms data into a format that doesn't cause problems in the data mart environment. It converts inconsistent data types into a single type. Data cleansing translates dissimilar identifiers to a standard set of codes for the data mart. In addition, it repairs or removes any data that does not meet the business rules required by the measures calculated from the data mart.

Data cleansing is usually done as part of a larger process that extracts the data from the OLTP systems and loads it into a data mart. The entire procedure is known as extract, transform, and load, or ETL.

The Extract, Transform, and Load (ETL) process extracts data to copy from one or more OLTP systems, performs any required data cleansing to transform the data into a consistent format, and loads the cleansed data by inserting it into the data mart. (Brian Larson, 2008)
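A hedged T-SQL sketch of such cleansing (the code values and business rules here are hypothetical; the staging table is the one created later in this report):

-- Standardize inconsistent identifiers to one format
UPDATE dbo.stg_untyped_trade
SET currency = UPPER(LTRIM(RTRIM(currency)));

-- Translate dissimilar identifiers to a standard set of codes
UPDATE dbo.stg_untyped_trade
SET currency = 'GBP'
WHERE currency IN ('UKP', 'STG');

-- Remove rows that do not meet the business rules
DELETE FROM dbo.stg_untyped_trade
WHERE trade_ref IS NULL;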

ETL Architecture (from Vincent’s book)

There are several ways of implementing ETL. The most prominent way is to pull the data from the source systems, put it in a staging area, and then transform it and load it into the data warehouse, as in the top diagram of the figure. Alternatively, instead of putting the data in a staging area, the ETL server sometimes does the transformation directly, with no staging, and then updates the data warehouse, as shown in the bottom diagram of the figure. The staging area is a physical database or a set of files. Putting the data into the staging area means inserting it into the database or writing it to files.

SQL Server 2008 Business Intelligence Application Development (SSIS, SSAS, SSRS)

SSIS – SQL Server Integration Services: a data warehousing tool developed by Microsoft.

Microsoft says that SQL Server Integration Services (SSIS) “is a platform for building high performance data integration solutions, including extraction, transformation, and load (ETL) packages for data warehousing.”

SSIS provides the ability to:

Retrieve data from any source

Perform various transformations on the data, e.g. convert from one type to another, convert to uppercase or lowercase, perform calculations, etc.

Load data into any destination

Define a workflow

The first version of SSIS was released with SQL Server 2005.  SSIS is a replacement for Data Transformation Services (DTS) which was available with SQL Server 7.0 and SQL Server 2000.  SSIS builds on the capabilities introduced with DTS.

SSAS – SQL Server Analysis Services:

Microsoft SQL Server 2005 Analysis Services (SSAS) delivers online analytical processing (OLAP) and data mining functionality for business intelligence applications. Analysis Services supports OLAP by letting you design, create, and manage multidimensional structures that contain data aggregated from other data sources, such as relational databases. For data mining applications, Analysis Services lets you design, create, and visualize data mining models that are constructed from other data sources by using a wide variety of industry-standard data mining algorithms.

SSRS – SQL Server Reporting Services:

Microsoft SQL Server 2005 Reporting Services (SSRS) delivers enterprise, Web-enabled reporting functionality so you can create reports that draw content from a variety of data sources, publish reports in various formats, and centrally manage security and subscriptions. Microsoft SQL Server Reporting Services enables organizations to transform valuable enterprise data into shared information for insightful, timely decisions at a lower total cost of ownership.

SQL Server Reporting Services is a comprehensive, server-based solution that enables the creation, management, and delivery of both traditional, paper-oriented reports and interactive, Web-based reports. An integrated part of the Microsoft Business Intelligence framework, Reporting Services combines the data management capabilities of SQL Server and Microsoft Windows Server with familiar and powerful Microsoft Office System applications to deliver real-time information to support daily operations and drive decisions.

Staging process

With the help of SSIS (SQL Server Integration Services), we can load the data from the trade capture system into the database. Here we can extract, transform and load (ETL) the packages for data warehousing. Since there is no organization supporting this system, all of the data was created by collecting information from external sources.

The following are the steps to create the SSIS packages for this system using Microsoft Visual Studio.

Create a new flat file connection manager pointing to the source file you want to load. Also create a new OLEDB connection manager.

For the untyped load (as used in this system):

Drag and drop the Data Flow Task in control flow.

Double click on it and open it in Data Flow tab.

Drag a new flat file source. Point this to the flat file connection you created before.


Verify that you can see the data.

Inside the Data Flow Task, drag a new OLEDB destination. Point it to the FRM db.

Create a new table and join it to the source. Check the mappings.

Run and verify.

For the typed load (as used in this system):

Again drag and drop the Data Flow Task in control flow.

Double click on it and open it in Data Flow tab.

Drag a new OLEDB source. Double click on it and point it to the FRM db. Create a new table and join it to the source. Also check the mappings. This OLEDB source reads the untyped load.

Now, for the data conversion, drag a Data Conversion transformation from the toolbox. Double click on it and select the columns whose data types have to be changed.

Then drag two OLEDB destinations (one will be the destination for the successfully converted typed load, and the other will be the destination for rows that generate errors).

Now double click on typed load destination, point it to the FRM db and create a new table and join it to the source. Also check the mappings.

After this double click on the error generated destination, point it to the FRM db and enter the table name created before and check the mappings.

Run and verify.
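Once both packages have run, a quick way to verify the loads is to compare row counts in the staging tables; a minimal sketch, assuming the FRM tables created below:

SELECT COUNT(*) AS untyped_rows FROM dbo.stg_untyped_trade;  -- raw rows loaded from the flat file
SELECT COUNT(*) AS typed_rows FROM dbo.stg_typed_trade;      -- rows that converted successfully
SELECT COUNT(*) AS error_rows FROM dbo.stg_error_trade;      -- rows routed to the error destination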

The above are the overall steps describing how the SSIS packages are created. Now we will see in detail how these steps work in the actual creation.

For this system, the data was initially analyzed using Excel with the .csv file extension. There are two .csv files in this system, each containing quite a large amount of data. These are the flat file sources for this system.

Below are the images of these two .csv files.

Figure: bucketed .csv file

Figure: src_trade_1_risk .csv file

The following figure shows the tables created in the database named FRM for this system.

Figure: tables created in Microsoft SQL Server Management Studio

SQL terms:

DBMS – Database management system.

Normalized – Elimination of redundancy in databases so that all columns depend on a primary key.

RDBMS – Relational database management system.

SQL – Structured Query Language is a standard language for communication with a relational database management system (RDBMS).

Schema. Consists of a library, a journal, a journal receiver, an SQL catalog, and optionally a data dictionary. A schema groups related objects and allows you to find the objects by name.

Table. A set of columns and rows

Row. The horizontal part of a table containing a serial set of columns

Column. The vertical part of a table of one data type.

View. A subset of columns and rows of one or more tables.

Package. An object type that is used to run SQL statements.

SQL statements:

The statements most commonly used for query writing in this report are summarized below; other statements are simply listed under the category of statements to which they belong.

Common SQL statements and their descriptions:

CREATE DATABASE – Creates a new database
CREATE INDEX – Creates a new index on a table column
CREATE SEQUENCE – Creates a new sequence in an existing database
CREATE TABLE – Creates a new table in an existing database
CREATE TRIGGER – Creates a new trigger definition
CREATE VIEW – Creates a new view on an existing table
SELECT – Retrieves records from a table
INSERT – Adds one or more new records into a table
UPDATE – Modifies the data in existing table records
DELETE – Removes existing records from a table
DROP DATABASE – Destroys an existing database
DROP INDEX – Removes a column index from an existing table
DROP SEQUENCE – Destroys an existing sequence generator
DROP TABLE – Destroys an existing table
DROP TRIGGER – Destroys an existing trigger definition
DROP VIEW – Destroys an existing table view
CREATE USER – Adds a new PostgreSQL user account to the system
ALTER USER – Modifies an existing PostgreSQL user account
DROP USER – Removes an existing PostgreSQL user account
GRANT – Grants rights on a database object to a user
REVOKE – Revokes rights on a database object from a user
CREATE FUNCTION – Creates a new SQL function within a database
CREATE LANGUAGE – Creates a new language definition within a database
CREATE OPERATOR – Creates a new SQL operator within a database
CREATE TYPE – Creates a new SQL data type within a database
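For instance, the statements used most in this report could be exercised against the staging tables created later (the literal values here are hypothetical):

-- Retrieve selected records
SELECT trade_ref, currency, pv
FROM dbo.stg_typed_trade
WHERE currency = 'USD';

-- Modify existing records
UPDATE dbo.stg_typed_trade
SET curve = 'LIBOR'
WHERE curve IS NULL;

-- Remove unwanted records
DELETE FROM dbo.stg_error_trade
WHERE ErrorCode IS NULL;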

There are four basic types of SQL statements:

Data definition language (DDL) statements

Data manipulation language (DML) statements

Dynamic SQL statements

Miscellaneous statements

The following lists the most common and basic SQL statements, grouped by category:

Data definition language statements (DDL)

ALTER TABLE

CREATE FUNCTION

CREATE INDEX

CREATE PROCEDURE

CREATE SCHEMA

CREATE TABLE

CREATE TRIGGER

CREATE VIEW

DROP TABLE

DROP FUNCTION

DROP INDEX

DROP PROCEDURE

DROP SCHEMA

DROP TRIGGER

DROP VIEW

GRANT FUNCTION

GRANT PROCEDURE

RENAME

REVOKE FUNCTION

REVOKE PROCEDURE

REVOKE TABLE

Data manipulation language (DML) statements

CLOSE

COMMIT

DELETE

FETCH

INSERT

LOCK TABLE

OPEN

REFRESH TABLE

ROLLBACK

SAVEPOINT

SELECT INTO

SET variable

UPDATE

VALUES INTO

Dynamic SQL statements

DESCRIBE

EXECUTE

EXECUTE IMMEDIATE

PREPARE

Miscellaneous Statements

BEGIN DECLARE SECTION

CALL

CONNECT

DECLARE PROCEDURE

DECLARE STATEMENT

DECLARE VARIABLE

DESCRIBE TABLE

DISCONNECT

END DECLARE SECTION

FREE LOCATOR

GET DIAGNOSTICS

HOLD LOCATOR

INCLUDE

RELEASE

SET CONNECTION

SET ENCRYPTION PASSWORD

SET OPTION

SET PATH

SET RESULT SETS

SET SCHEMA

SET TRANSACTION

SIGNAL

WHENEVER

The following statements create the staging tables.

For table stg_typed_trade:

CREATE TABLE [dbo].[stg_typed_trade](
    [trade_ref] [varchar](50) NULL,
    [value_date] [int] NULL,
    [currency] [varchar](50) NULL,
    [curve] [varchar](50) NULL,
    [funding_currency] [varchar](50) NULL,
    [pv] [numeric](18, 4) NULL,
    [pvxt] [numeric](18, 4) NULL,
    [notional] [numeric](18, 4) NULL,
    [theta] [numeric](18, 4) NULL
) ON [PRIMARY]
GO

For table stg_typed_bucketed:

CREATE TABLE [dbo].[stg_typed_bucketed](
    [trade_ref] [varchar](50) NULL,
    [currency] [varchar](50) NULL,
    [funding_currency] [varchar](50) NULL,
    [value_date] [varchar](50) NULL,
    [value_time] [varchar](50) NULL,
    [tenor] [varchar](50) NULL,
    [curve] [varchar](50) NULL,
    [gamma] [numeric](18, 8) NULL
) ON [PRIMARY]
GO

For table stg_untyped_trade:

CREATE TABLE [dbo].[stg_untyped_trade](
    [trade_ref] [varchar](50) NULL,
    [value_date] [varchar](50) NULL,
    [currency] [varchar](50) NULL,
    [curve] [varchar](50) NULL,
    [pv] [varchar](50) NULL,
    [pvxt] [varchar](50) NULL,
    [notional] [varchar](50) NULL,
    [funding_currency] [varchar](50) NULL,
    [theta] [varchar](50) NULL
) ON [PRIMARY]
GO

For table stg_untyped_bucketed:

CREATE TABLE [dbo].[stg_untyped_bucketed](
    [trade_ref] [varchar](50) NULL,
    [currency] [varchar](50) NULL,
    [funding_currency] [varchar](50) NULL,
    [value_date] [varchar](50) NULL,
    [value_time] [varchar](50) NULL,
    [tenor] [varchar](50) NULL,
    [curve] [varchar](50) NULL,
    [gamma] [varchar](50) NULL
) ON [PRIMARY]
GO

For table stg_error_trade:

CREATE TABLE [dbo].[stg_error_trade](
    [trade_ref] [varchar](50) NULL,
    [value_date] [varchar](50) NULL,
    [currency] [varchar](50) NULL,
    [curve] [varchar](50) NULL,
    [pv] [varchar](50) NULL,
    [pvxt] [varchar](50) NULL,
    [notional] [varchar](50) NULL,
    [funding_currency] [varchar](50) NULL,
    [theta] [varchar](50) NULL,
    [ErrorCode] [int] NULL,
    [ErrorColumn] [int] NULL
) ON [PRIMARY]
GO

For table stg_error_bucketed:

CREATE TABLE [dbo].[stg_error_bucketed](
    [trade_ref] [varchar](50) NULL,
    [currency] [varchar](50) NULL,
    [funding_currency] [varchar](50) NULL,
    [value_date] [varchar](50) NULL,
    [value_time] [varchar](50) NULL,
    [tenor] [varchar](50) NULL,
    [curve] [varchar](50) NULL,
    [gamma] [varchar](50) NULL,
    [ErrorCode] [int] NULL,
    [ErrorColumn] [int] NULL
) ON [PRIMARY]
GO
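The SSIS Data Conversion step described earlier can be expressed in plain T-SQL as a hedged sketch (the ISNUMERIC checks and the -1 error code are illustrative assumptions, not the package's actual behavior): rows that convert cleanly go to the typed table, and the rest are routed to the error table.

-- Typed load: convert rows that pass the numeric checks
INSERT INTO dbo.stg_typed_trade
    (trade_ref, value_date, currency, curve, funding_currency, pv, pvxt, notional, theta)
SELECT trade_ref,
    CAST(value_date AS int),
    currency, curve, funding_currency,
    CAST(pv AS numeric(18, 4)),
    CAST(pvxt AS numeric(18, 4)),
    CAST(notional AS numeric(18, 4)),
    CAST(theta AS numeric(18, 4))
FROM dbo.stg_untyped_trade
WHERE ISNUMERIC(value_date) = 1 AND ISNUMERIC(pv) = 1 AND ISNUMERIC(pvxt) = 1
    AND ISNUMERIC(notional) = 1 AND ISNUMERIC(theta) = 1;

-- Error load: route the remaining rows to the error table
INSERT INTO dbo.stg_error_trade
    (trade_ref, value_date, currency, curve, pv, pvxt, notional, funding_currency, theta,
     ErrorCode, ErrorColumn)
SELECT trade_ref, value_date, currency, curve, pv, pvxt, notional, funding_currency, theta,
    -1, NULL   -- hypothetical error code; SSIS supplies its own codes
FROM dbo.stg_untyped_trade
WHERE ISNUMERIC(value_date) = 0 OR ISNUMERIC(pv) = 0 OR ISNUMERIC(pvxt) = 0
    OR ISNUMERIC(notional) = 0 OR ISNUMERIC(theta) = 0;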

Finance

Swap

A swap is an agreement between two or more parties to exchange sequences of cash flows over a period in the future. Suppose a company has borrowed money under an adjustable-interest-rate security such as a mortgage and is now fearful that the interest rate is going to rise. It wants to protect itself against rises in the interest rate without going through the refinancing of the mortgage. The company or individual liable for an adjustable rate looks for someone who will pay the adjustable interest payments in return for receipt of fixed-rate payments. This is called a swap. A swaption (option on a swap) gives the holder the right to enter into, or the right to cancel out of, a swap. (R. Kolb, J. Overdahl, 2007)

There are five generic types of Swaps:

Interest rate swaps

Currency swaps

Credit swaps

Commodity swaps

Equity swaps.

Options

An option gives the buyer or holder the right, but not the obligation, to buy or sell shares (or other financial instruments) at a fixed price on or before a given date.

There are two major classes of options: call options and put options.

The owner of a call option has the right to purchase the underlying good at a specific price, and this right lasts until a specific date. The owner of a put option has the right to sell the underlying good at a specific price, and this right lasts until a specific date. (R. Kolb, J. Overdahl, 2007)

Forwards

A forward contract is an agreement between two parties for the delivery of a physical asset (e.g. oil or gold) at a certain time in the future for a certain price, fixed at the inception of the contract. The parties agreeing to the contract are known as counterparties. (R.Kolb, J. Overdahl, 2007)

Futures

A futures contract is a type of forward contract with highly standardized and precisely specified contract terms. As in all forward contracts, a futures contract calls for the exchange of some good at a future date for cash, with the payment for the good to occur at that future date. The purchaser of the futures contract undertakes to receive delivery of the good and pay for it, while the seller of a futures contract promises to deliver the good and receive payment. The price of the good is determined at the initial time of the contract. (R. Kolb, J. Overdahl, 2007)

There are several financial risk parameters such as Delta, Theta, Gamma, Vega, and Rho.

John Summa, Ph.D. (1998), founder and president of OptionsNerd.com LLC and TradingAgainstTheCrowd.com, defines all the above-named risk parameters as follows:

Delta 

Delta for individual options, and position Delta for strategies involving combinations of positions, are measures of risk from a move of the underlying price. For example, if you buy an at-the-money call or put, it will have a Delta of approximately 0.5, meaning that if the underlying stock price moves 1 point, the option price will change by 0.5 points (all other things remaining the same). If the price moves up, the call will increase by 0.5 points and the put will decrease by 0.5 points. While a 0.5 Delta is for options at-the-money, the range of Delta values will run from 0 to 1.0 (1.0 being a long stock equivalent position) and from -1.0 to 0 for puts (with -1.0 being an equivalent short stock position). (John Summa, 1998).

Gamma

Delta measures the change in price of an option resulting from the change in the underlying price. However, Delta is not a constant. When the underlying moves so does the Delta value on any option. This rate of change of Delta resulting from movement of the underlying is known as Gamma. And Gamma is largest for options that are at-the-money, while smallest for those options that are deepest in- and out-of-the-money. Gammas that get too big are risky for traders, but they also hold potential for large-size gains. Gammas can get very large as expiration nears, particularly on the last trading day for near-the-money options. (John Summa, 1998).


Theta

Theta is a measure of the rate of time premium decay and it is always negative (leaving position Theta aside for now). Anybody who has purchased an option knows what Theta is, since it is one of the most difficult hurdles to surmount for buyers. As soon as you own an option (a wasting asset), the clock starts ticking, and with each tick the amount of time value remaining on the option decreases, other things remaining the same. Owners of these wasting assets take the position because they believe the underlying stock or futures will make a move quick enough to put a profit on the option position before the clock has ticked too long. In other words, Delta beats Theta and the trade can be closed profitably. When Theta beats Delta, the seller of the option would show gains. This tug of war between Delta and Theta characterizes the experience of many traders, whether long (purchasers) or short (sellers) of options. (John Summa, 1998).

Vega 

When any position is taken in options, not only is there risk from changes in the underlying but there is risk from changes in implied volatility. Vega is the measure of that risk. When the underlying changes, or even if it does not in some cases, implied volatility levels may change. Whether large or small, any change in the levels of implied volatility will have an impact on unrealized profit/loss in a strategy. Some strategies are long volatility and others are short volatility, while some can be constructed to be neutral volatility. For example, a put that is purchased is long volatility, which means the value increases when volatility increases and falls when volatility drops (assuming the underlying price remains the same). Conversely, a put that is sold (naked) is short volatility (the position loses value if the volatility increases). When a strategy is long volatility, it has a positive position Vega value and when short volatility, its position Vega is negative. When the volatility risk has been neutralized, position Vega will be neither positive nor negative. (John Summa, 1998).

Rho

Rho is a risk measure related to changes in interest rates. Since the interest rate risk is generally of a trivial nature for most strategists (the risk free interest rate does not make large enough changes in the time frame of most options strategies), it will not be dealt with at length in this tutorial. 

When interest rates rise, call prices will rise and put prices will fall. Just the reverse occurs when interest rates fall. Rho is a risk measure that tells strategists by how much call and put prices change as a result of a rise or fall in interest rates. The Rho values for in-the-money options will be largest due to arbitrage activity with such options. Arbitragers are willing to pay more for call options and less for put options when interest rates rise because of the interest earnings potential on short sales made to hedge long calls and the opportunity cost of not earning that interest. (John Summa, 1998).

Mathematical formulas in finance

Simple Interest Future Value

FV = PV * (1 + i * N)

Where,

FV – future value (or maturity value)

PV – principal or present value

i – interest rate per period

N – number of periods

Simple Interest

I = PV * i * N

Where,

PV – principal or present value

i – interest rate per period

N – number of periods
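For example, taking hypothetical values PV = 1,000, i = 5% per year and N = 3 years: I = 1000 * 0.05 * 3 = 150, so the simple interest future value is FV = 1000 * (1 + 0.05 * 3) = 1,150.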

Compound Interest Future Value

FV = PV * (1 + i)^N

Where,

PV = present value

FV = future value (maturity value)

i = interest rate in percent per period

N = number of periods
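For example, with hypothetical values PV = 1,000, i = 5% per period and N = 10 periods: FV = 1000 * (1.05)^10 = 1000 * 1.62889 ≈ 1,628.89, compared with 1,500 under simple interest over the same ten periods.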

Compound Interest Earned

I = FV – PV

Where,

PV = present value

FV = future value (maturity value)

I = interest earned over the life of the investment

Annuity

FV = PMT * [ ( (1 + i)^N – 1 ) / i ]

Where,

FV = future value (maturity value)

PMT = payment per period

i = interest rate in percent per period

N = number of periods
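For example, with hypothetical payments PMT = 100 per period, i = 5% per period and N = 10 periods: FV = 100 * ( (1.05^10 – 1) / 0.05 ) = 100 * (0.62889 / 0.05) ≈ 1,257.79.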

Simple Interest Amortized Loan Formula

PV * (1 + i)^N = PMT * [ ( (1 + i)^N – 1 ) / i ]

Where,

PMT = the payment per period

i = interest rate in percent per period

PV = loan / mortgage amount

N = number of periods

Present Value:

Present value is the value on a given date of a future payment or series of future payments, discounted to reflect the time value of money and other factors such as investment risk. Present value calculations are widely used in business and economics to provide a means to compare cash flows at different times on a meaningful “like to like” basis.

The present value of an annuity is given by:

PV = PMT * [ ( 1 – (1 + i)^–N ) / i ]

Where,

PV = present value of the annuity

PMT = payment per period

i = interest rate in percent per period

N = number of periods

Use of BI in Financial Services

BI solutions in the financial services industry have been delivered most effectively within individual lines of business. For financial analysis and customer service, BI has allowed individual financial products and lines of business to use integrated information. BI is used to consolidate information, support sales and marketing, and manage risk.

Risk Management: Richard Skriletz (Aug 2003) classified risk in financial services into two forms. The first is the risk to lending institutions of default on payments such as mortgage payments. Defaults are a problem because they disrupt cash flow, affect securities issued by the financial institution that are backed by loans, and can affect (as one of the rating factors) the rating agencies' evaluations of the financial strength of the company. BI data mining and statistical modeling are being applied successfully to the problem of managing risk associated with defaults on loans. These tools help to identify patterns in data that correlate with loan defaulters so that the process of underwriting or approving loan applications can screen out applicants with these data patterns.

The second area of risk is the evaluation of the company by rating agencies. Evaluations are meant to provide an assessment of the stability of the financial institution, its ability to fulfill its obligations to customers and its relative (compared to other companies rated) strength for investors. Relative strength is a risk factor because an agency’s rating can affect the cost of capital to the financial institution, and the cost of capital is the “raw material” of financial products and services. It determines the operating margin (the difference between costs and gross revenues) of the company because rates for many financial products and services (such as mortgage rates) are set by the market. A lower cost of capital (due to a better rating) can increase the operating margin and provide an advantage over competitors with lower ratings.

Agencies ask for reports and data to use in their evaluation and monitoring processes. RCG IT clients using BI solutions to consolidate information and monitor risks such as defaults have been able to meet the information needs of rating agencies easily and demonstrate their ability to respond to changes in the business environment. Consider the question, no longer hypothetical, of assessing the impact of a terrorist attack such as that of 9/11 on your business. An effective BI solution can help answer this question quickly and easily.

BI Opportunities in Financial Services

The major opportunity for BI in financial services is clearly for enterprise-wide integration of customer, product, channel and operational data. After all, the financial services business is largely about information, and most financial services companies have used or experimented with BI technology successfully. Such companies should value their information assets and use them to improve products, services and business operations. Too much information is locked in the wide array of application systems including those for departmental and line of business products, customer relationship management, enterprise resource planning, financial management and e-business.

Henry Vlokya (12 Aug 2009) said that there are challenges to creating a truly integrated, enterprise-wide information environment in financial services companies. There are decision-makers in large financial services organizations who believe that their data volumes are too great, their information processing needs too specialized, and their end users too set in their ways for a BI solution to be effective. They should be encouraged to look at the retail industry, where many BI best practices have been developed. In retail, the volume of transactions processed, the wide range of BI analytics used and the variety of end users who employ BI results are all greater than those found in financial services.

The challenges are not technical. Rather, organizational issues present the greatest challenge for financial services companies. In particular, the management culture of financial services companies is focused on financial performance and profit. This culture comes from the entrepreneurial moneymaking that is at the heart of the financial services industry. Currently, it is the rare executive who is willing to invest in an enterprise-wide solution and all that it entails; but there are more such executives appearing every day. (Justin Aucoin, 17 Sep 2009)

RCG IT is working with financial services companies that are creating a centralized, integrated information facility to perform in-depth customer analytics. The intents vary, including developing a better understanding of customer behavior, providing better performing cross-selling capabilities and identifying new opportunities to improve products and services. In each of these cases, the drive for enterprise-wide information integration comes from a high-level executive.

One financial services client, who was dissatisfied with the results of its five-year experience with in-house developed BI, outsourced its BI function. The company did this in order to get results needed from its operationally mission-critical use of BI without continuing to invest in learning and experimentation with the technology. As BI solutions become as important to the financial services industry as they are in retail and other industries, more executives will be making decisions to integrate information on an enterprise-wide basis and ensure that the right skills and experiences are in place to deliver high-value business results.

References

Inmon, W.H. (2002) Building the Data Warehouse. 3rd edition. n.p.: John Wiley and Sons, Inc.

Kimball, R. and Ross, M. (2002) The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling. 2nd edition. New York: John Wiley and Sons, Inc.

Larson, B. (2006) Delivering Business Intelligence with Microsoft SQL Server 2005. California: McGraw-Hill/Osborne.

Luhn, H.P. (1958) A Business Intelligence System. IBM Journal. n.v.

Moss, L.T. and Atre, S. (2003) Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-Support Applications. Boston: Pearson Education, Inc.

Summa, J. (2001) Options on Futures: New Trading Strategies and Options on Futures Workbook. John Wiley and Sons.

Summa, J. (2004) Trading Against the Crowd: Profiting from Fear and Greed in Stock, Futures and Options Markets. John Wiley and Sons.
