Estimation Of Dynamic Consumption Function For Nigeria Economics Essay

This project estimates a dynamic consumption function for Nigeria from 1969 to 2003. The study focuses on the linear dynamic specification in order to discuss the parameter estimates and carry out various tests. The first chapter focuses on consumption function theory, how various economists view the theories relating income and consumption, and how the relationship can be specified in both linear and non-linear form. The next chapter describes the data: how it was obtained, processed through EViews, and interpreted to fit the project.

The next chapter is hypothesis testing, which is used to assess the significance of the estimated parameters and the structure of the relationship that determines consumption. The t-test and F-test are used to measure the significance of the estimated coefficients, and the chapter explains significance testing, where we either reject or do not reject the null hypothesis. It further estimates the long-run and short-run marginal propensity to consume and how they affect consumption. It also covers diagnostic tests such as the Ramsey RESET test, the heteroskedasticity test, and the LM test for serial correlation, and tests the model for misspecification.

CHAPTER 2 : LITERATURE REVIEW

This chapter focuses on the way a consumer divides total disposable income between total consumption and saving.

The consumption function is a mathematical formula laid out by John Maynard Keynes. The formula was designed to show the relationship between real disposable income and consumer spending, and is used to calculate the amount of total consumption in an economy. Total consumption is made up of autonomous consumption, which is not influenced by current income, and induced consumption, which is influenced by the economy’s level of income.

The simple consumption function is shown as the linear function:

C = c0 + c1Yd

Where:

C = total consumption,

c0 = autonomous consumption (c0 > 0),

c1 = the marginal propensity to consume, which generates the induced consumption (0 < c1 < 1), and

Yd = disposable income (income after taxes and transfer payments, or W – T).

Autonomous consumption represents consumption when income is zero; in estimation, this is usually assumed to be positive. The marginal propensity to consume (MPC), on the other hand, measures the rate at which consumption changes as income changes. Geometrically, the MPC is the slope of the consumption function.

The MPC is assumed to be positive. Thus, as income increases, consumption increases. However, Keynes mentioned that the increases (for income and consumption) are not equal. According to him, “as income increases, consumption increases but not by as much as the increase in income”.
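The relationship above can be sketched numerically. The values of c0 and c1 below are illustrative assumptions, not estimates from the data:

```python
def consumption(yd, c0=20.0, c1=0.75):
    """Keynesian consumption function: C = c0 + c1*Yd.

    c0: autonomous consumption (> 0); c1: marginal propensity to
    consume (0 < c1 < 1). Both values here are purely illustrative.
    """
    return c0 + c1 * yd

# As income rises by 100, consumption rises by only c1*100 = 75,
# matching Keynes's claim that consumption rises by less than income:
print(consumption(200) - consumption(100))  # 75.0
```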

There are four approaches to consumption function and they are:

Absolute income hypothesis (Keynes)

Relative income hypothesis (James Duesenberry 1949)

Permanent income hypothesis (Friedman)

Life Cycle hypothesis (Ando&Modigliani)

The absolute income hypothesis in economics was proposed by the English economist John Maynard Keynes (1883-1946), and was refined extensively during the 1960s and 1970s, notably by the American economist James Tobin (1918-2002).

The theory examines the relationship between income and consumption, and asserts that the consumption level of a household depends not on its relative income but on its absolute level of income. As income rises, the theory asserts, consumption will also rise but not necessarily at the same rate.

Developed by James Stemble Duesenberry, the relative income hypothesis states that an individual’s attitude to consumption and saving is dictated more by his income in relation to others than by an abstract standard of living; an individual is less concerned with his absolute level of consumption than with relative levels. The percentage of income consumed by an individual depends on his percentile position within the income distribution.

Secondly, it hypothesizes that present consumption is not influenced merely by present levels of absolute and relative income, but also by the level of consumption attained in previous periods. It is difficult for a family to reduce a level of consumption once attained. The aggregate ratio of consumption to income is assumed to depend on the level of present income relative to past peak income.

Ct = a + bYt

The current level of consumption is a straightforward function, driven by the current level of income. This implies that people adapt instantaneously to income changes.

– There is rapid adaptation to income changes

– Consumption is highly elastic with respect to changes in current income

In the other theories, the elasticity of consumption with respect to current income is lower; they reduce the sensitivity of consumption to current income flows.

Permanent income hypothesis (PIH): this theory of consumption was developed by the American economist Milton Friedman. In its simplest form, the hypothesis states that the choices made by consumers regarding their consumption patterns are determined not by current income but by their longer-term income expectations. The key conclusion of this theory is that transitory, short-term changes in income have little effect on consumer spending behavior.

Measured income and measured consumption contain a permanent (anticipated and planned) element and a transitory (windfall gain/unexpected) element. Friedman concluded that the individual will consume a constant proportion of his/her permanent income; that low income earners have a higher propensity to consume; and that high income earners have a higher transitory element to their income and a lower than average propensity to consume.

In Friedman’s permanent income hypothesis model, the key determinant of consumption is an individual’s real wealth, not his current real disposable income. Permanent income is determined by a consumer’s assets, both physical (shares, bonds, property) and human (education and experience). These influence the consumer’s ability to earn income, from which the consumer can estimate anticipated lifetime income. The hypothesis also explains why there was no collapse in spending after World War II. Friedman argues that it is sensible for people to use current income, but at the same time to form expectations about future levels of income and the relative amounts of risk.

Thus, they are forming an analysis of “permanent income.”

Permanent Income = Past Income + Expected Future Income

Transitory Income – income earned in excess of (or below) what was expected: an unexpected windfall or shortfall that is not anticipated to recur.

So, he argues that we tend to spend more out of permanent income than out of transitory income.

In the Friedman analysis, he treats people as forming their level of expected future income based on their past incomes. This is known as adaptive expectations.

Adaptive Expectations – forming expectations about the future from past observations. In this case, we use a distributed lag of past income.

YPt+1 = E(Yt+1) = B0Yt + B1Yt-1 + B2Yt-2 + …

Where B0 > B1 > B2

It is also possible to add the constraint B0 + B1 + B2 + B3 + … + Bn = 1. This gives expected (permanent) income; transitory income can then be thought of as YTt = Yt − YPt. Using this, we can construct a new model of the consumption function:

Ct = a + bYDt + cYTt
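As a sketch, permanent income under adaptive expectations can be computed as a weighted sum of past incomes. The weights below are illustrative; they simply satisfy B0 > B1 > B2 and sum to one, as the constraint requires:

```python
def permanent_income(past_incomes, weights):
    """Permanent (expected) income as a distributed lag of past income.

    past_incomes: [Y_t, Y_{t-1}, ...]; weights: [B0, B1, ...] with
    B0 > B1 > ... and sum(weights) == 1 (the constraint in the text).
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(b * y for b, y in zip(weights, past_incomes))

# Illustrative incomes and weights:
yp = permanent_income([100.0, 90.0, 80.0], [0.5, 0.3, 0.2])
yt = 100.0 - yp   # transitory income: actual minus permanent
print(yp, yt)     # 93.0 7.0
```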

There are other factors that people can look at to think about future levels of income. For example, people can think about future interest rates and their effect on their income stream.

The Relative Income Hypothesis: the Duesenberry approach says that people are not just concerned about absolute levels of possessions; they are concerned about their possessions relative to others. People are not necessarily happier if they have more money, but they do report higher happiness if they have more relative to others. The utility function is therefore modified so that utility depends on consumption relative to others.

Current economists, such as Robert Frank and Juliet Schor, still support this idea.

Duesenberry argues that we have a greater tendency to resist spending decreases when income falls than to increase expenditure when income rises. The reason is that we do not want to adjust our standard of living downward.

CT = a + bYT + cYX

YX is the previous peak level of income (this keeps expenditure from falling in the face of income drops). It is also known as the Drag Effect.

A shift in expenditures relative to a previous level of income is known as the Ratchet Effect.

Duesenberry argues that we will shift the curve up or move along the curve, but that we will resist shifts down. When World War II ended, a significant number of economists claimed that there would be a decline in consumption and a drop in aggregate demand, which did not occur. This provides supporting evidence for the hypothesis.

A long-run consumption function can be drawn, assuming that there is a growth trend. If this is true, previous peak income would have been that of last year and thus would give a consumption function that looks like it depends on current income.

 The Life Cycle Hypothesis

This is primarily attributed to Ando and Modigliani.

The basic notion is that consumption spending will be smooth in the face of an erratic stream of income.

Working Phase:

Maintain current consumption, pay off debt from youth years

Maintain current consumption, build up reserves

Age distribution now matters when we look at consumption and, in general, the propensity to consume. Debt and wealth are also taken into account when we look at the propensity to consume. The dependency structure of the population will influence consumption patterns.

Lester Thurow (1976) argued that this model does not work because it presumes there is no motive for building wealth other than consumption; Thurow argues that the real motivation is status and power (both internal and external to the family). The permanent income hypothesis bears a resemblance to the life-cycle hypothesis in that, in both hypotheses, individuals must behave as if they have some sense of the future.

CHAPTER 3: DATA

This chapter explains the data used in this project. The country studied is Nigeria, and the data were downloaded from the IMF International Financial Statistics. The country table is chosen from the Annual IFS series via Beyond 20/20 WDS. From the Nigeria Annual IFS series the following variables were selected: 96F.CZF HOUSEH.CONS.EXPEND. INCL.NPISHS SA (Units: National Currency)

(Scale: Billions) = NCONS

64…ZF CPI: ALL ITEMS (2000=100) (Units: Index Number) = CPI

99I.CZF GROSS NATIONAL DISPOSABLE INCOME SA (Units: National Currency)

(Scale: Billions) = NYD

99BIRZF GDP DEFLATOR (2000=100) (Units: Index Number) = GDPdef

Then click on Show Tables and, from the Download icon, select Microsoft Excel format.

In Excel, I converted the nominal series to real series by deflating with a price index.

Rcons, the real consumption expenditure series for Nigeria, is obtained by dividing NCONS (96F.CZF), household consumption expenditure, by the CPI (64…ZF):

Rcons = (NCONS / CPI) × 100

RYD, the real disposable income series for Nigeria, is obtained by deflating nominal disposable income (NYD, 99I.CZF) by the GDP deflator (99BIRZF):

RYD = (NYD / GDPdef) × 100

The results are saved in a new spreadsheet and then imported into EViews.
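The deflating step can be sketched in Python; the series below are illustrative numbers, not the actual IFS data:

```python
def deflate(nominal, price_index, base=100.0):
    """Convert a nominal series to real terms: real = nominal / index * base.

    Mirrors Rcons = NCONS / CPI and RYD = NYD / GDPdef (index base 2000 = 100).
    """
    return [n / p * base for n, p in zip(nominal, price_index)]

ncons = [10.0, 15.0, 30.0]    # illustrative nominal consumption, billions
cpi   = [50.0, 100.0, 150.0]  # illustrative CPI, 2000 = 100
print(deflate(ncons, cpi))    # [20.0, 15.0, 20.0]
```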

CHAPTER 4: ESTIMATED RESULT AND INTERPRETATION

This chapter reports the ordinary least squares results for Nigeria between 1969 and 2003, specified in both linear and log-linear forms.

Interpretation of the intercept: when the data series lies far from the origin of the X,Y plot of the variables, estimates of the intercept will not have a meaningful economic interpretation.

Marginal propensities and elasticities.

————- Linear specification

The estimated coefficients are marginal propensities: the proportion of an additional naira (N) of income that will be spent on consumption.

Where N (the naira) is the unit of measurement.

———— Log linear specification.

The estimated coefficients from a log-linear specification are not marginal propensities; they are elasticities, because they measure the responsiveness of the dependent variable to changes in the independent variables. Thus, an elasticity of 0.70 indicates that a 1% increase in the independent variable raises the dependent variable by 0.7%.

Elasticities are convenient because they are unit-free and relate percentage changes, while propensities relate absolute changes in the units of measurement.

Reporting of dynamic linear specification:

Dynamic linear specification: rcons c ryd rcons(-1)

Dynamic linear equation:

Rconst = 4.506 + 0.20 RYDt + 0.74 Rconst-1

Report of dynamic linear specification:

             Constant   RYD    Rcons(-1)
Coefficient   4.506     0.20     0.74
Std. Error    5.375     0.14     0.18
t-Statistic   0.838     1.38     4.10
Prob          0.408     0.18     0.00

Reporting of dynamic log linear specification:

Dynamic log linear specification: log(rcons) c log(ryd) log(rcons(-1))

Dynamic log linear equation:

log(Rconst) = 0.951 + 0.163 log(RYDt) + 0.598 log(Rconst-1)

Report of dynamic log linear specification

             Constant   log(RYD)   log(Rcons(-1))
Coefficient   0.951      0.163        0.598
Std. Error    0.380      0.097        0.164
t-Statistic   2.501      1.677        3.638
Prob          0.018      0.104        0.001

Testing whether the coefficients on RYD and Rcons(-1) are zero.

Rcons = 4.506 + 0.20 RYD + 0.74 Rcons(-1)

{5.375} {0.14} {0.18}

H0 : β= 0

H1 : β ≠ 0

t-statistic = 0.20 / 0.14 = 1.43

tcrit = tn-k,α/2

Where,

n=34

k=3,

α = 5%

tcrit = t34-3, 0.05/2 = t31, 0.025 ≈ 2.042

FIGURE 1.5: two-tailed t distribution with 0.025 rejection regions in each tail.

We will therefore not reject the null hypothesis that the MPC (the coefficient on RYD) is equal to zero, because 1.43 < 2.042, which means the OLS estimate is not statistically significantly different from zero.

Rcons = 4.506 + 0.20 RYD + 0.74 Rcons(-1)

{5.375} {0.14} {0.18}

H0 : γ = 0

H1 : γ ≠ 0

t-statistic = 0.74 / 0.18 = 4.11

tcrit = tn-k,α/2

Where n = 34, k = 3, α = 5%

tcrit = t34-3, 0.05/2 = t31, 0.025 ≈ 2.042

We will therefore reject the null hypothesis that the coefficient on Rcons(-1) is equal to zero, because 4.11 > 2.042, which means the OLS estimate is statistically significantly different from zero.

Testing whether the elasticities, the coefficients on log(RYD) and log(Rcons(-1)), are zero.

H0 : β= 0

H1 : β ≠ 0

t-statistic = 0.163 / 0.097 = 1.68

tcrit = tn-k,α/2

where n = 34, k = 3, α = 5%

tcrit = t34-3, 0.05/2 = t31, 0.025 ≈ 2.042


We will therefore not reject the null hypothesis that the elasticity with respect to log(RYD) is equal to zero, because 1.68 < 2.042, which means the OLS estimate is not statistically significantly different from zero.

log(Rcons) = 0.951 + 0.163 log(RYD) + 0.598 log(Rcons(-1))

{0.380} {0.097} {0.164}

H0 : γ = 0

H1 : γ ≠ 0

t-statistic = 0.598 / 0.164 = 3.64

tcrit = tn-k,α/2

Where n = 34, k = 3, α = 5%

tcrit = t34-3, 0.05/2 = t31, 0.025 ≈ 2.042

We therefore reject the null hypothesis that the elasticity with respect to log(Rcons(-1)) is zero, because 3.64 > 2.042; the estimate is statistically significantly different from zero.
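The t-tests above can be reproduced with a short sketch, using the coefficients and standard errors reported for the dynamic linear specification:

```python
def t_stat(coef, se):
    """t-statistic for H0: coefficient = 0."""
    return coef / se

# Estimates from the dynamic linear specification reported above:
t_ryd = t_stat(0.20, 0.14)   # 1.43 < 2.042 -> do not reject H0
t_lag = t_stat(0.74, 0.18)   # 4.11 > 2.042 -> reject H0
t_crit = 2.042               # critical t with n = 34, k = 3 at 5%
print(round(t_ryd, 2), round(t_lag, 2))
print(abs(t_ryd) > t_crit, abs(t_lag) > t_crit)
```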

CHAPTER 5: TESTING THE SIGNIFICANCE OF THE ESTIMATED PARAMETERS

A statistical hypothesis test is a method of making statistical decisions using experimental data. In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. The phrase “test of significance” was coined by Ronald Fisher: “Critical tests of this kind may be called tests of significance, and when such tests are available we may discover whether a second sample is or is not significantly different from the first.”

Hypothesis testing is sometimes called confirmatory data analysis, in contrast to exploratory data analysis. In frequentist probability, these decisions are almost always made using null-hypothesis tests; that is, ones that answer the question: assuming that the null hypothesis is true, what is the probability of observing a value for the test statistic that is at least as extreme as the value actually observed? One use of hypothesis testing is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.

Statistical hypothesis testing is a key technique of frequentist statistical inference, and is widely used, but also much criticized. The main direct alternative to statistical hypothesis testing is Bayesian inference. However, other approaches to reaching a decision based on data are available via decision theory and optimal decisions.

The critical region of a hypothesis test is the set of all outcomes which, if they occur, will lead us to decide that there is a difference. That is, cause the null hypothesis to be rejected in favor of the alternative hypothesis. The critical region is usually denoted by C.

Null hypothesis is a phrase that was originally coined by the English geneticist and statistician Ronald Fisher. In statistical hypothesis testing, the null hypothesis (H0) formally describes some aspect of the statistical “behaviour” of a set of data. This description is assumed to be valid unless the actual behaviour of the data contradicts this assumption. Thus, the null hypothesis is contrasted against another, alternative hypothesis.

Statistical hypothesis testing, which involves a number of steps, is used to decide whether the data contradict the null hypothesis. This is called significance testing. A null hypothesis is never proven by such methods, as the absence of evidence against the null hypothesis does not establish its truth. In other words, one may either reject, or not reject, the null hypothesis; one cannot accept it. This means that one cannot make decisions or draw conclusions that assume the truth of the null hypothesis.

Just as failing to reject it does not “prove” the null hypothesis, one does not conclude that the alternative hypothesis is disproven or rejected, even though this seems reasonable. One simply concludes that the null hypothesis is not rejected. Not rejecting the null hypothesis still allows for gathering new data to test the alternative hypothesis again. On the other hand, rejecting the null hypothesis only means that the alternative hypothesis may be true, pending further testing.

Dynamic linear equation:

Test the overall significance of Dynamic linear regression model consumption function.

Rcons = 4.506 + 0.20 RYD + 0.74 Rcons(-1)

{5.375} {0.14} {0.18}

R2=0.70

N=34

Ho: β = γ = 0

H1: at least one of β, γ ≠ 0


The F-statistic is F = [R²/(k − 1)] / [(1 − R²)/(n − k)] = (0.70/2) / (0.30/31) ≈ 36.16, while the critical value at 5% is about 3.32. We therefore reject the null hypothesis that the coefficients on RYD (β) and Rcons(-1) (γ) are jointly zero, because Fs > Fc (36.16 > 3.32); the regression is statistically significant overall.
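The overall F-statistic can be recomputed from R², n and k as a check:

```python
def f_stat(r2, n, k):
    """Overall-significance F statistic: F = (R2/(k-1)) / ((1-R2)/(n-k))."""
    return (r2 / (k - 1)) / ((1 - r2) / (n - k))

# R2 = 0.70, n = 34, k = 3 from the dynamic linear model:
print(round(f_stat(0.70, 34, 3), 1))  # 36.2, against a 5% critical value of about 3.3
```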

The intercept is not statistically significant at the 5% level; in any case, in this estimation the intercept has no economic meaning.

The coefficient on the lagged dependent variable (LDV) is statistically significant at the 5% level, while the coefficient on RYD is not.

The coefficient of determination, R2

R2 is a measure of the proportion of the variation in the dependent variable that is explained by all the explanatory variables in the equation.

In statistics, the coefficient of determination, R2 is used in the context of statistical models whose main purpose is the prediction of future outcomes on the basis of other related information. It is the proportion of variability in a data set that is accounted for by the statistical model. It provides a measure of how well future outcomes are likely to be predicted by the model.

There are several different definitions of R2 which are only sometimes equivalent. One class of such cases includes that of linear regression. In this case, R2 is simply the square of the sample correlation coefficient between the outcomes and their predicted values, or in the case of simple linear regression, between the outcome and the values being used for prediction. In such cases, the values vary from 0 to 1. Important cases where the computational definition of R2 can yield negative values, depending on the definition used, arise where the predictions which are being compared to the corresponding outcome have not derived from a model-fitting procedure using those data.

R2 is expressed as follows:

R2 = ESS/TSS = 1 − RSS/TSS = 1 − ∑e²ᵢ / ∑(Yᵢ − Ȳ)²

Where; ESS= Explained sum of squares

TSS= Total sum of squares

RSS=Residuals sum of squares

Therefore R2 in dynamic linear regression model is explained as follows.

R2 is the coefficient of determination, also used to measure the goodness of fit: it tells us how close the data points are to the fitted function, with 0 < R2 < 1. More precisely, R2 = 0.70 in the dynamic linear regression model tells us that 70% of the variability in Rconst can be explained by the variability in the explanatory variables (RYDt and Rconst-1).
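As a sketch, R² can be computed directly from the residuals using the formula above; the data points are illustrative:

```python
def r_squared(y, fitted):
    """R2 = 1 - RSS/TSS, computed from actual and fitted values."""
    ybar = sum(y) / len(y)
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    tss = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - rss / tss

# Illustrative actual and fitted values:
print(r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.0]))  # 0.99
```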

ESTIMATION OF LONG RUN AND SHORT RUN MARGINAL PROPENSITY TO CONSUME

There are two different MPCs. The first is the short-run marginal propensity to consume, which shows the marginal propensity to consume for one period of time.

This simply indicates the effect that a one-unit change in disposable income would have on consumption in the same period.

The second is the long-run marginal propensity to consume, which takes recent consumption behaviour, as well as disposable income, into consideration when determining the level of consumption.

By taking into consideration previous consumption and current income, it allows you to assess what effect the past could have on consumption.

The long run is obtained through the use of steady-state consumption. In the steady state we can assume Ct = Ct-1 = C, which allows the dynamic equation

Ct = a + bYt + cCt-1

to be written as

C = a + bY + cC

Taking cC to the left-hand side we get

C − cC = a + bY

When C is factored out it is written as

C(1 − c) = a + bY

When solving for C we obtain

C = a/(1 − c) + [b/(1 − c)]Y

So, using the steady state, the long-run marginal propensity to consume on the income variable is b/(1 − c). With the estimates b = 0.20 and c = 0.74, the long-run MPC is 0.20/0.26 ≈ 0.77, while the short-run MPC is b = 0.20.
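The short-run and long-run MPC implied by the estimated dynamic linear model can be computed as follows, using the coefficients b = 0.20 and c = 0.74 reported earlier:

```python
def long_run_mpc(b_short, c_lag):
    """Steady state: C = a + b*Y + c*C  =>  C = (a + b*Y)/(1 - c).

    The long-run MPC is b / (1 - c); the short-run MPC is b itself.
    """
    return b_short / (1.0 - c_lag)

# Estimates from the dynamic linear model:
print(round(long_run_mpc(0.20, 0.74), 2))  # 0.77, versus a short-run MPC of 0.20
```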

CHAPTER 6: THE RAMSEY RESET TEST

Ramsey Regression Equation Specification Error Test (RESET) test (Ramsey, 1969) is a general specification test for the linear regression model. More specifically, it tests whether non-linear combinations of the estimated values help explain the endogenous variable. The intuition behind the test is that, if non-linear combinations of the explanatory variables have any power in explaining the endogenous variable, then the model is mis-specified. The RESET test is designed to detect omitted variables and incorrect functional form.

The Ramsey test then tests whether (β1x)^2, (β2x)^3, …, (βk−1x)^k have any power in explaining y. This is executed by estimating an augmented regression and then testing, by means of an F-test, whether the coefficients on the added non-linear terms are zero. If the null hypothesis that all regression coefficients of the non-linear terms are zero is rejected, then the model suffers from misspecification.

For a univariate x, the test can also be performed by regressing y on the truncated power series of the explanatory variable and using an F-test on the coefficients of the added power terms.

Test rejection implies the same insight as the first version mentioned above.

The F-test compares regressions, the original one and the Ramsey’s auxiliary one, as done with the evaluation of linear restrictions. The original model is the restricted one opposed to the Ramsey’s unrestricted model.

F(k − 1,n − k), where:


n is the sample size;

k is the number of parameters in Ramsey’s model.

Furthermore, the linear model and the model with the non-linear power terms are subjected to the F-test, similarly as before:

F(k − 1, n − m − k),

where m + k is the number of parameters in Ramsey’s model: the k − 1 non-linear terms in the Ramsey group plus the m + 1 parameters of the original model.

Test for misspecification.

Rejection of H0 implies the original model is inadequate and can be improved. A failure to reject H0 says the test has not been able to detect any misspecification.

Overall, the general philosophy of the test is: If we can significantly improve the model by artificially including powers of the predictions of the model, then the original model must have been inadequate.


Estimating this model, and then augmenting it with squares of the predictions, and with squares and cubes of the predictions, yields the RESET test results. The F-values are quite small and their corresponding p-values of 0.93 and 0.70 are well above the conventional significance level of 0.05. There is no evidence from the RESET test to suggest the log-log model is inadequate.
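A minimal sketch of the RESET mechanics, using NumPy and simulated data; the data and the squares-and-cubes augmentation are illustrative assumptions, not the essay's actual series:

```python
import numpy as np

def ols(y, X):
    """OLS coefficients and residuals via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def reset_f(y, X, max_power=3):
    """Ramsey RESET: augment with powers of the fitted values, compare RSS via F."""
    n = X.shape[0]
    beta, e_r = ols(y, X)                    # restricted (original) model
    yhat = X @ beta
    extra = np.column_stack([yhat ** p for p in range(2, max_power + 1)])
    Xu = np.hstack([X, extra])
    _, e_u = ols(y, Xu)                      # unrestricted (augmented) model
    q = max_power - 1                        # number of added terms
    rss_r, rss_u = e_r @ e_r, e_u @ e_u
    return ((rss_r - rss_u) / q) / (rss_u / (n - Xu.shape[1]))

# Illustrative data: y is truly linear in x, so the RESET F should be small.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
print(reset_f(y, X))
```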

CHAPTER 7: BREUSCH-GODFREY SERIAL CORRELATION LM TEST

The Breusch-Godfrey Lagrange Multiplier (BGLM) test computes the Breusch (1978)-Godfrey (1978) Lagrange multiplier test for independence in the error distribution. For a specified number of lags p, the test’s null hypothesis of independent errors is set against the alternative of serial correlation up to order p. The test statistic, a T·R² measure, is distributed Chi-squared(p) under the null hypothesis.

In statistics, the BG serial correlation LM test is a test for autocorrelation in the residuals from a regression analysis, and is considered more general than the standard Durbin-Watson statistic. The null hypothesis is that there is no serial correlation of any order up to p. Unlike the Durbin-Watson h statistic, which is only valid for nonstochastic regressors and first-order autoregressive schemes, the BG test has none of these restrictions and is statistically more powerful than Durbin’s h statistic.

Characteristics of Breusch-Godfrey Test:

Allows for a relationship between ut and several of its lags

Estimate a regression and obtain residuals

Regress residuals on all regressors and lagged residuals

Obtain R2 from this regression; letting T denote the number of observations, the test statistic is T·R2.

A common problem in time series is that the stochastic term in one period is not independent of that in another.

Serial correlation does not affect the unbiasedness of the OLS estimates, but it does affect their minimum-variance property and hence their efficiency.

The LM test checks whether the estimated residuals from the restricted form are related to lagged values of themselves.

If serial correlation does not exist, then the regressions of the unrestricted and restricted forms are the same.

There are two forms of the test reported:

(1) Chi-squared (£2): the test statistics is calculated as TR2, where T is the number of observations in the original regression and R2 is the R-squared in the auxiliary regression. This has a£2 (h) distribution for h restrictions (lags). So where h=5.

If TR2 < (£2): (h) 0.05 then we do not reject the null of no autocorrelation (at the 5% Significance level).

If TR2 > £2 (h) 0.05 then we must reject the null of no autocorrelation

(At the 5% Significance level).

(2) F Test

The test statistic is calculated as Fcal = (T − k − 1 − h)·R² / [h(1 − R²)], where k is the number of regressors in the original equation (here k = 3). This has an F(h, T − k − 1 − h) distribution.

If Fcal < F tables, 0.05 then we do not reject the null of no autocorrelation (at the 5%

Significance level).

If Fcal > F tables, 0.05 then we must reject the null hypothesis of no autocorrelation (at the 5% significance level)

The Chi-squared form is used in carrying out the test below.
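A sketch of the T·R² mechanics with NumPy on simulated data; the series and lag length are illustrative, and the statistic is compared against a χ²(5) distribution:

```python
import numpy as np

def breusch_godfrey_lm(resid, X, p=5):
    """BG LM test: regress e_t on X and e_{t-1..t-p}; statistic = T*R^2 ~ chi2(p)."""
    T = len(resid)
    # Lagged residuals, with the first i values zero-padded (a common convention):
    lags = np.column_stack([np.concatenate([np.zeros(i), resid[:T - i]])
                            for i in range(1, p + 1)])
    Z = np.hstack([X, lags])
    beta, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    e_aux = resid - Z @ beta
    tss = (resid - resid.mean()) @ (resid - resid.mean())
    r2 = 1.0 - (e_aux @ e_aux) / tss
    return T * r2   # compare with the chi2(p) critical value (11.07 at 5% for p = 5)

# Illustrative regression with independent errors:
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)
X = np.column_stack([np.ones(100), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(breusch_godfrey_lm(y - X @ b, X, p=5))
```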

CHAPTER 8: HETEROSKEDASTICITY TEST

HETEROSKEDASTICITY

One of the assumptions of the classical linear regression (CLR) model is that the error term is drawn from a distribution with a constant variance; when this is the case, the errors are said to be homoskedastic. When this assumption does not hold, the problem of heteroskedasticity occurs.

Heteroskedasticity, as a violation of the assumptions of the CLR model, causes the OLS estimates to lose some of their desirable properties.

Heteroskedasticity is more likely to occur in a cross-sectional model than in a time-series model.

Heteroskedasticity causes OLS to underestimate the variances and standard errors of the estimated coefficients

This implies that the t-test and F-test are not reliable

The t-statistics tend to be higher leading us to reject a null hypothesis that should not be rejected

F-statistic follows an F distribution with k degrees of freedom in the numerator and (n – k -1) degrees of freedom in the denominator

Reject the null hypothesis that there exists no heteroskedasticity if the F-statistic is greater than the critical F-value at the selected level of significance.

If the null is rejected, then heteroskedasticity exists in the data and an alternative estimation method to OLS must be used.

Testing for heteroskedasticity

When testing for heteroskedasticity, there are different ways to proceed, because heteroskedasticity takes a number of different forms and its exact manifestation in a given equation is almost never known.

The main focus will be on the White test, which is more generally used than the other tests.

The White test detects heteroskedasticity by running an auxiliary regression with the squared residuals as the dependent variable. The right-hand side of this second equation contains all the original independent variables, their squares, and the cross products of the original variables with each other. The White test has the advantage of not assuming any particular form of heteroskedasticity, which makes it one of the best tests yet devised for detecting heteroskedasticity of any type.


This is a White test for heteroskedasticity with cross-products.

The test statistic is a χ² with h degrees of freedom, where h = k − 1 from the auxiliary equation (in this case h = 6 − 1 = 5). We obtain critical values from the χ² tables; at the 5% significance level with h = 5 degrees of freedom, the critical value is χ²(5)0.05 = 11.07.

With the critical value available, we calculate the test statistic from the auxiliary equation, which had T = 34 observations and an R² of 0.374449. The test statistic is:

The test statistics NR2 = TR2 = 34(0.374449) = 12.731266

Since TR² = 12.73 > 11.07, we reject the null hypothesis of homoskedasticity.
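A sketch of the White-test computation with NumPy; the single-regressor case below (levels and squares only, so h = 2) is a simplified illustration of the T·R² calculation, not the essay's actual five-term auxiliary regression:

```python
import numpy as np

def white_tr2(resid, x):
    """White test, single regressor x: regress e^2 on [1, x, x^2]; statistic = T*R^2."""
    T = len(resid)
    e2 = resid ** 2
    Z = np.column_stack([np.ones(T), x, x ** 2])   # auxiliary regressors
    g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    u = e2 - Z @ g
    tss = (e2 - e2.mean()) @ (e2 - e2.mean())
    return T * (1.0 - (u @ u) / tss)   # compare with chi2(h), h = 2 here

# Illustrative homoskedastic regression:
rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(white_tr2(y - X @ b, x))
```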

CHAPTER 10: STATIONARITY, RANDOM WALK AND COINTEGRATION

A common assumption in many time series techniques is that the data are stationary.

A stationary process has the property that the mean, variance and autocorrelation structure do not change over time. Stationarity can be defined in precise mathematical terms, but for our purpose we mean a flat looking series, without trend, constant variance over time, a constant autocorrelation structure over time and no periodic fluctuations.

Non-stationary data are unpredictable and cannot be modelled or forecasted. The results obtained by using non-stationary time series may be spurious in that they may indicate a relationship between two variables where one does not exist. In order to receive consistent, reliable results, the non-stationary data needs to be transformed into stationary data. In contrast to the non-stationary process that has a variable variance and a mean that does not remain near, or returns to a long-run mean over time, the stationary process reverts around a constant long-term mean and has a constant variance independent of time.

Transformations to Achieve Stationarity

If the time series is not stationary, we can often transform it to stationarity with one of the following techniques.

We can difference the data. That is, given the series z[t], we create the new series ∆z[t] = z[t] − z[t−1].

The differenced data will contain one less point than the original data. Although you can difference the data more than once, one difference is usually sufficient.

If the data contain a trend, we can fit some type of curve to the data and then model the residuals from that fit. Since the purpose of the fit is to simply remove long term trend, a simple fit, such as a straight line, is typically used.

For non-constant variance, taking the logarithm or square root of the series may stabilize the variance. For negative data, you can add a suitable constant to make all the data positive before applying the transformation. This constant can then be subtracted from the model to obtain predicted values, that is, the fitted values and forecasts for future points.
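The first of these transformations, differencing, can be written in a couple of lines (a Python sketch; the helper name and the toy series are made up for illustration):

```python
def difference(z):
    """First difference of a series: dz[t] = z[t] - z[t-1].

    The result is one observation shorter than the input series."""
    return [z[t] - z[t - 1] for t in range(1, len(z))]

series = [3.0, 5.0, 4.0, 7.0]
print(difference(series))  # [2.0, -1.0, 3.0]
```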

RANDOM WALK

A random walk is defined as a process where the current value of a variable is composed of the past value plus an error term defined as a white noise (a normal variable with zero mean and variance one).

Algebraically a random walk is represented as follows: y[t] = y[t−1] + e[t].

The implication of a process of this type is that the best prediction of y for next period is the current value, or in other words the process does not allow predicting the change. That is, the change of y is completely random.

It can be shown that the mean of a random walk process is constant but its variance is not. Therefore a random walk process is non stationary, and its variance increases with t.

In practice, the presence of a random walk process makes forecasting very simple, since the forecast of all future values y[t+s], for s > 0, is simply the current value y[t].
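The definition above can be simulated directly (a Python sketch; the function name and seed are arbitrary choices):

```python
import random

def random_walk(n_steps, seed=0):
    """Simulate y[t] = y[t-1] + e[t], with e[t] standard normal white noise."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n_steps):
        y.append(y[-1] + rng.gauss(0.0, 1.0))
    return y

y = random_walk(200)
# The best forecast of any future value y[t+s] (s > 0) is simply the
# current value y[t]: the change itself is completely unpredictable.
forecast = y[-1]
```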

A random walk model with drift

A drift acts like a trend, and the process has the following form: y[t] = α + y[t−1] + e[t].


For α > 0 the process will show an upward trend.

Assuming a = 1 in y[t] = α + a·y[t−1] + e[t], the process shows both a deterministic trend and a stochastic trend. Using the general solution for the previous process, y[t] = y[0] + αt + (e[1] + e[2] + … + e[t]), where αt is the deterministic trend and the accumulated errors e[1] + … + e[t] are the stochastic trend.

The relevance of the random walk model is that many economic time series follow a pattern that resembles a trend model. Furthermore, if two time series are independent random walk processes then the relationship between the two does not have an economic meaning. If one still estimates a regression model between the two the following results are expected:

(a) High R2

(b) Low Durbin-Watson statistic (or, equivalently, high residual autocorrelation)

(c) High t ratio for the slope coefficient

This indicates that the results from the regression are spurious. A regression in terms of the changes can provide evidence against previous spurious results. If the coefficient β in the regression

∆y[t] = β∆x[t] + u[t]

is not significant, then this is an indication that the relationship between y and x is spurious, and one should proceed by selecting other explanatory variables.

If the Durbin-Watson test is passed then the two series are cointegrated, and a regression between them is appropriate.

COINTEGRATION

Cointegration theory is arguably the innovation in theoretical econometrics that has created the most interest among economists. The definition in the simple case of two time series xt and yt, both integrated of order one (abbreviated I(1), meaning that each process contains a unit root), is the following:

Definition

xt and yt are said to be cointegrated if there exists a parameter α such that

u[t] = y[t] − αx[t]

is a stationary process.

This turns out to be a path-breaking way of looking at time series, because it seems that many economic series behave that way and because this is often predicted by theory.

The first thing to notice is that economic series behave like I(1) processes, i.e. they seem to drift all over the place; the second thing to notice is that they seem to drift in such a way that they do not drift away from each other. Formulating this statistically leads to the cointegration model.
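A minimal simulation of this definition can make it concrete (a Python sketch; α = 2, the noise scale and the seed are arbitrary choices): x[t] is a random walk, y[t] tracks αx[t] up to stationary noise, so the combination y[t] − αx[t] stays bounded even though x[t] itself wanders.

```python
import random

rng = random.Random(1)
alpha = 2.0

# x[t] is I(1): a pure random walk
x = [0.0]
for _ in range(500):
    x.append(x[-1] + rng.gauss(0.0, 1.0))

# y[t] shares x[t]'s stochastic trend, plus stationary noise
y = [alpha * xt + rng.gauss(0.0, 0.5) for xt in x]

# The cointegrating combination removes the common stochastic trend ...
z = [yt - alpha * xt for yt, xt in zip(y, x)]

# ... leaving only the stationary noise, which stays within a few
# multiples of its 0.5 standard deviation while x drifts freely.
print(max(abs(v) for v in z) < 3.0)
```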

TESTING FOR RANDOM WALK

A unit root test is usually carried out by using the regression test introduced by Dickey and Fuller (1979). Under the null hypothesis the series should be a random walk. But a non-stationary series can usually be decomposed into a random walk and a stationary component.

The Dickey-Fuller Unit Root Test for Non-stationarity

A Dickey-Fuller test is an econometric test for whether a certain kind of time series data has an autoregressive unit root. In particular, in the time series econometric model y[t] = by[t-1] + e[t], where t is an integer greater than zero indexing time, and b = 1 under the null hypothesis, let bOLS denote the OLS estimate of b from a particular sample. Let T be the sample size. Then the test statistic T*(bOLS - 1) has a known, documented distribution. Its value in a particular sample can be compared to that distribution to determine the probability that the original sample came from a unit root autoregressive process; that is, one in which b = 1.
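The statistic T*(bOLS − 1) can be computed directly (a Python sketch assuming the simplest model y[t] = b·y[t-1] + e[t] with no constant; the helper name and the toy series are made up for illustration — in practice the statistic is compared against the Dickey-Fuller tables, not the normal ones):

```python
def dickey_fuller_stat(y):
    """T * (b_hat - 1) for the model y[t] = b * y[t-1] + e[t].

    b_hat is the OLS estimate from regressing y[t] on y[t-1] without a
    constant; T is the number of usable observations. Strongly negative
    values are evidence against the unit-root null b = 1."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    b_hat = num / den
    T = len(y) - 1
    return T * (b_hat - 1.0)

# A series decaying at rate 0.5 gives b_hat = 0.5 exactly, so the
# statistic is 3 * (0.5 - 1) = -1.5, away from the unit-root value 0.
print(dickey_fuller_stat([1.0, 0.5, 0.25, 0.125]))  # -1.5
```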

Augmented Dickey Fuller test

An augmented Dickey-Fuller test is a test for a unit root in a time series sample. It is a version of the Dickey-Fuller test for a larger and more complicated set of time series models. The augmented Dickey-Fuller (ADF) statistic, used in the test, is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root, at some given level of confidence.

APPENDICES

APPENDIX 1

NIGERIAN (SEPTEMBER 2009) ANNUAL IFS SERIES

Series:
(1) 64…ZF    CPI: ALL INC. IN URBAN/RURAL AREAS (Units: Index Number)
(2) 96F..ZF  HOUSEH. CONS. EXPEND., INCL. NPISHS (Units: National Currency; Scale: Billions)
(3) 99A..ZF  GROSS NATIONAL INCOME (GNI) (Units: National Currency; Scale: Billions)
(4) 99BIPZF  GDP DEFLATOR (2003=100) (Units: Index Number)

Time      (1)         (2)        (3)        (4)
1969      0.142026    2.901      3.682      0.381415
1970      0.161565    4.143      5.125      0.576998
1971      0.187414    5.09       6.853      0.584776
1972      0.193894    5.267      7.133      0.601693
1973      0.204369    6.903      10.578     0.324827
1974      0.230272    10.962     18.376     0.463547
1975      0.308482    13.689     21.559     0.559422
1976      0.383443    16.297     27.298     0.629019
1977      0.441296    19.061     32.272     0.682775
1978      0.537098    24.341     35.61      0.790858
1979      0.599991    25.928     42.535     0.918183
1980      0.659824    31.695     49.759     1.0211
1981      0.797151    34.563     49.839     1.12529
1982      0.858514    36.284     50.547     1.15585
1983      1.0578      41.457     56.168     1.3435
1984      1.2463      47.962     62.009     1.57577
1985      1.33897     54.066     70.732     1.63874
1986      1.41552     56.204     68.682     1.60447
1987      1.57533     78.329     97.225     2.40248
1988      2.43407     113.073    132.503    2.91572
1989      3.66246     138.828    207.173    4.20235
1990      3.93218     155.274    238.27     4.49293
1991      4.44364     222.27     299.536    5.33186
1992      6.425       404.182    485.404    8.79247
1993      10.0979     537.473    627.911    10.9303
1994      15.8569     694.053    849.231    14.0805
1995      27.4063     1543.09    1773.65    29.7889
1996      35.4276     2367.96    2612.84    40.9551
1997      38.4496     2434.62    2713.96    41.2998
1998      42.2931     2757       2705.42    39.5653
1999      45.0923     1969.37    3082.41    44.3461
2000      48.2186     2446.54    4618.55    64.1422
2001      57.3193     3642.58    4517.46    60.105
2002      64.7        5540.19    5216.64    84.6155
2003      73.7786     7044.54    6763.74    100

APPENDIX 2

Time      RCONS       RYD
1969      20.42584    9.653527
1970      25.64293    8.88218
1971      27.15912    11.71902
1972      27.16433    11.85488
1973      33.77714    32.56503
1974      47.60457    39.64215
1975      44.37536    38.53799
1976      42.50175    43.39774
1977      43.19323    47.26594
1978      45.31948    45.02705
1979      43.21398    46.32519
1980      48.03554    48.73078
1981      43.35816    44.28992
1982      42.26373    43.73145
1983      39.19172    41.80722
1984      38.48351    39.35156
1985      40.3788     43.16243
1986      39.70555    42.80666
1987      49.72228    40.4686
1988      46.45429    45.44435
1989      37.90567    49.29932
1990      39.48802    53.03221
1991      50.0198     56.17852
1992      62.9077     55.20678
1993      53.22622    57.44682
1994      43.76978    60.31256
1995      56.30421    59.54063
1996      66.83941    63.79767
1997      63.31977    65.71364
1998      65.18794    68.3786
1999      43.6742     69.50803
2000      50.73851    72.00486
2001      63.54893    75.15947
2002      85.6289     61.65112
2003      95.48216    67.6374

APPENDIX 3

LINEAR REGRESSION MODEL CONSUMPTION FUNCTION

Linear specification: rcons c ryd

Linear equation: rcons = α + βryd

Dependent Variable: RCONS

Method: Least Squares

Date: 10/26/09 Time: 14:23

Sample: 1969 2003

Included observations: 35

Variable    Coefficient   Std. Error   t-Statistic   Prob.
C           15.33550      5.088069     3.014011      0.0049
RYD         0.680475      0.100993     6.737814      0.0000

R-squared            0.579072     Mean dependent var      47.60036
Adjusted R-squared   0.566316     S.D. dependent var      15.44940
S.E. of regression   10.17415     Akaike info criterion   7.533023
Sum squared resid    3415.940     Schwarz criterion       7.621900
Log likelihood      -129.8279     Hannan-Quinn criter.    7.563703
F-statistic          45.39813     Durbin-Watson stat      0.867793
Prob(F-statistic)    0.000000

LOG LINEAR SPECIFICATION

Log linear specification: log(rcons) c log(ryd)

Log linear equation: log(rcons) = α + βlog(ryd)

Dependent Variable: LOG(RCONS)

Method: Least Squares

Date: 10/21/09 Time: 15:04

Sample: 1969 2003

Included observations: 35

Variable    Coefficient   Std. Error   t-Statistic   Prob.
C           2.039330      0.211973     9.620723      0.0000
LOG(RYD)    0.473350      0.055936     8.462289      0.0000

R-squared            0.684544     Mean dependent var      3.814441
Adjusted R-squared   0.674984     S.D. dependent var      0.316484
S.E. of regression   0.180428     Akaike info criterion  -0.531526
Sum squared resid    1.074289     Schwarz criterion      -0.442649
Log likelihood       11.30171     Hannan-Quinn criter.   -0.500846
F-statistic          71.61033     Durbin-Watson stat      0.965647
Prob(F-statistic)    0.000000

APPENDIX 4

Dynamic linear function

Dynamic linear specification: rcons c ryd rcons(-1)

Dynamic linear equation: rcons = α + βryd + γrcons(-1)

Dependent Variable: RCONS

Method: Least Squares

Date: 10/21/09 Time: 15:34

Sample (adjusted): 1970 2003

Included observations: 34 after adjustments

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            4.506354      5.375165     0.838366      0.4082
RYD          0.202606      0.146306     1.384811      0.1760
RCONS(-1)    0.737391      0.179686     4.103786      0.0003

R-squared            0.699289     Mean dependent var      48.39961
Adjusted R-squared   0.679888     S.D. dependent var      14.92920
S.E. of regression   8.446706     Akaike info criterion   7.189527
Sum squared resid    2211.752     Schwarz criterion       7.324206
Log likelihood      -119.2220     Hannan-Quinn criter.    7.235457
F-statistic          36.04452     Durbin-Watson stat      1.424198
Prob(F-statistic)    0.000000

Dynamic log linear function:

Dynamic log linear specification: log(rcons) c log(ryd) log(rcons(-1))

Dynamic log linear equation: log(rcons) = α + βlog(ryd) + γlog(rcons(-1))

Dependent Variable: LOG(RCONS)

Method: Least Squares

Date: 10/21/09 Time: 15:32

Sample (adjusted): 1970 2003

Included observations: 34 after adjustments

Variable          Coefficient   Std. Error   t-Statistic   Prob.
C                 0.951086      0.380349     2.500559      0.0179
LOG(RYD)          0.162741      0.097055     1.676798      0.1036
LOG(RCONS(-1))    0.598390      0.164471     3.638278      0.0010

R-squared            0.729374     Mean dependent var      3.837901
Adjusted R-squared   0.711914     S.D. dependent var      0.288705
S.E. of regression   0.154958     Akaike info criterion  -0.807224
Sum squared resid    0.744374     Schwarz criterion      -0.672545
Log likelihood       16.72281     Hannan-Quinn criter.   -0.761295
F-statistic          41.77463     Durbin-Watson stat      1.417667
Prob(F-statistic)    0.000000

APPENDIX 5

RAMSEY RESET TEST

Dynamic linear function

Dynamic linear specification: rcons c ryd rcons(-1)

Dynamic linear equation: rcons = α + βryd + γrcons(-1)

Ramsey RESET Test:

F-statistic            2.243951     Prob. F(2,29)         0.1241
Log likelihood ratio   4.892206     Prob. Chi-Square(2)   0.0866

Test Equation:

Dependent Variable: RCONS

Method: Least Squares

Date: 12/02/09 Time: 17:47

Sample: 1970 2003

Included observations: 34

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            4.043292      27.65016     0.146230      0.8848
RYD          0.682897      0.530752     1.286660      0.2084
RCONS(-1)    1.496027      1.891990     0.790716      0.4355
FITTED^2    -0.044123      0.052895    -0.834168      0.4110
FITTED^3     0.000384      0.000351     1.096192      0.2820

R-squared            0.739589     Mean dependent var      48.39961
Adjusted R-squared   0.703670     S.D. dependent var      14.92920
S.E. of regression   8.126887     Akaike info criterion   7.163286
Sum squared resid    1915.343     Schwarz criterion       7.387751
Log likelihood      -116.7759     Hannan-Quinn criter.    7.239835
F-statistic          20.59061     Durbin-Watson stat      1.752695
Prob(F-statistic)    0.000000

APPENDIX 6

BREUSCH-GODFREY SERIAL CORRELATION LM TEST

Dynamic linear function

Dynamic linear specification: rcons c ryd rcons(-1)

Dynamic linear equation: rcons = α + βryd + γrcons(-1)

Breusch-Godfrey Serial Correlation LM Test:

F-statistic      3.260477     Prob. F(2,29)         0.0528
Obs*R-squared    6.241736     Prob. Chi-Square(2)   0.0441

Test Equation:

Dependent Variable: RESID

Method: Least Squares

Date: 12/02/09 Time: 17:49

Sample: 1970 2003

Included observations: 34

Presample missing value lagged residuals set to zero.

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            6.594438      6.711195     0.982603      0.3339
RYD          0.309798      0.287950     1.075873      0.2909
RCONS(-1)   -0.464851      0.391742    -1.186624      0.2450
RESID(-1)    0.668505      0.359984     1.857041      0.0735
RESID(-2)   -0.116130      0.288298    -0.402812      0.6900

R-squared            0.183580     Mean dependent var     -2.72E-15
Adjusted R-squared   0.070971     S.D. dependent var      8.186745
S.E. of regression   7.890888     Akaike info criterion   7.104347
Sum squared resid    1805.717     Schwarz criterion       7.328812
Log likelihood      -115.7739     Hannan-Quinn criter.    7.180896
F-statistic          1.630238     Durbin-Watson stat      2.130296
Prob(F-statistic)    0.193398

APPENDIX 7

HETEROSKEDASTICITY TEST

Dynamic linear function

Dynamic linear specification: rcons c ryd rcons(-1)

Dynamic linear equation: rcons = α + βryd + γrcons(-1)

Heteroskedasticity Test: White

F-statistic            3.352107     Prob. F(5,28)         0.0169
Obs*R-squared          12.73126     Prob. Chi-Square(5)   0.0260
Scaled explained SS    17.71086     Prob. Chi-Square(5)   0.0033

Test Equation:

Dependent Variable: RESID^2

Method: Least Squares

Date: 12/02/09 Time: 17:52

Sample: 1970 2003

Included observations: 34

Variable         Coefficient   Std. Error   t-Statistic   Prob.
C                230.9927      246.2392     0.938083      0.3562
RYD             -6.765674      6.695113    -1.010539      0.3209
RYD^2           -0.206819      0.143325    -1.443002      0.1601
RYD*RCONS(-1)    0.654743      0.338315     1.935305      0.0631
RCONS(-1)       -6.113805      12.64696    -0.483421      0.6326
RCONS(-1)^2     -0.256126      0.170470    -1.502469      0.1442

R-squared            0.374449     Mean dependent var      65.05153
Adjusted R-squared   0.262743     S.D. dependent var      120.7970
S.E. of regression   103.7207     Akaike info criterion   12.28007
Sum squared resid    301223.8     Schwarz criterion       12.54942
Log likelihood      -202.7611     Hannan-Quinn criter.    12.37193
F-statistic          3.352107     Durbin-Watson stat      2.070800
Prob(F-statistic)    0.016946

APPENDIX 8

AUGMENTED DICKEY FULLER TEST IN FIRST DIFFERENCES

Null Hypothesis: D(RCONS) has a unit root

Exogenous: Constant

Lag Length: 0 (Automatic based on SIC, MAXLAG=0)

                                          t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic    -4.714140     0.0006
Test critical values:    1% level         -3.646342
                         5% level         -2.954021
                         10% level        -2.615817

*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation

Dependent Variable: D(RCONS,2)

Method: Least Squares

Date: 12/14/09 Time: 13:25

Sample (adjusted): 1971 2003

Included observations: 33 after adjustments

Variable        Coefficient   Std. Error   t-Statistic   Prob.
D(RCONS(-1))   -0.845947      0.179449    -4.714140      0.0000
C               1.811955      1.544072     1.173491      0.2495

R-squared            0.417546     Mean dependent var      0.140490
Adjusted R-squared   0.398758     S.D. dependent var      11.13363
S.E. of regression   8.632997     Akaike info criterion   7.207752
Sum squared resid    2310.388     Schwarz criterion       7.298450
Log likelihood      -116.9279     Hannan-Quinn criter.    7.238269
F-statistic          22.22312     Durbin-Watson stat      1.899706
Prob(F-statistic)    0.000049
