Friday, April 06, 2007

How to Lie with Statistics

    Confidence intervals for the predicted values - logistic regression

    Using predict after logistic to get predicted probabilities and confidence
    intervals is somewhat tricky. The following two commands will give you predicted
    probabilities:

    . logistic ...
    . predict phat


    The following does not give you the standard error of the predicted
    probabilities:

    . logistic ...
    . predict se_phat, stdp


    Despite the name we chose, se_phat does not contain the standard error of phat;
    it contains the standard error of the predicted index. The index is the linear
    combination of the estimated coefficients and the values of the independent
    variables for each observation in the dataset. Suppose we fit the following
    logistic regression model:


    . logistic y x


    This model estimates the coefficients b0 and b1 of the following model:

    P(y = 1) = exp(b0 + b1*x) / (1 + exp(b0 + b1*x))

    Here the index is b0 + b1*x. We could get predicted values of the index and its
    standard error as follows:

    . logistic y x
    . predict lr_index, xb
    . predict se_index, stdp


    We could transform our predicted value of the index into a predicted
    probability as follows:


    . gen p_hat = exp(lr_index)/(1+exp(lr_index))


    This is just what predict does by default after a logistic regression
    if no options are specified. Using a similar procedure, we can get a 95%
    confidence interval for our predicted probabilities by first generating the
    lower and upper bounds of a 95% confidence interval for the index and then
    converting these to probabilities:



    . gen lb = lr_index - invnorm(0.975)*se_index
    . gen ub = lr_index + invnorm(0.975)*se_index
    . gen plb = exp(lb)/(1+exp(lb))
    . gen pub = exp(ub)/(1+exp(ub))


    Generating the confidence intervals for the index and then converting them to
    probabilities is a better way to get confidence intervals for the predicted
    probabilities than estimating the standard error of the predicted probabilities
    and generating the intervals directly from that standard error, because the
    distribution of the predicted index is closer to normality than that of the
    predicted probability.
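
    For comparison, here is a minimal SAS sketch of the same approach using proc
    logistic; the dataset and variable names (mydata, y, x) are placeholders. The
    xbeta= and stdxbeta= options of the output statement play the same role as
    Stata's xb and stdp:

    proc logistic data=mydata;
      model y(event='1') = x;
      output out=pred xbeta=lr_index stdxbeta=se_index;
    run;

    data pred;
      set pred;
      lb    = lr_index - probit(0.975)*se_index;  /* 95% CI on the index scale */
      ub    = lr_index + probit(0.975)*se_index;
      p_hat = exp(lr_index)/(1 + exp(lr_index));  /* back-transform to probabilities */
      plb   = exp(lb)/(1 + exp(lb));
      pub   = exp(ub)/(1 + exp(ub));
    run;

    If I recall correctly, the output statement's lower= and upper= options return
    these limits directly.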

    Wednesday, March 28, 2007

    Hosmer and Lemeshow Test

    The Hosmer-Lemeshow goodness-of-fit test can be performed by adding the lackfit
    option to the model statement. This test divides subjects into deciles based on
    their predicted probabilities and then computes a chi-square statistic from the
    observed and expected frequencies. It tests the null hypothesis that there is no
    difference between the observed and predicted values of the response variable.
    Therefore, when the test is not significant, as in this example, we cannot reject
    the null hypothesis and conclude that the model fits the data well. We can also
    request the generalized R-square measure for the model by using the rsq option on
    the model statement; SAS reports the likelihood-based pseudo R-square and its
    rescaled version.

    Categorical Data Analysis Using the SAS System, by M. Stokes, C. Davis, and
    G. Koch, offers more details on how the generalized R-square measures are
    constructed and how to interpret them.

    proc logistic data=hsb2;
      class prog(ref='1') / param=ref;
      model hiwrite(event='1') = female prog read math / rsq lackfit;
    run;

    Thursday, March 22, 2007

    Cumulation of the data

    /* month labels and matching Teradata date bounds, one entry per month */
    %let name=200311+200312+200401+200402+200403+200404;
    %let first_d=1031101+1031201+1040101+1040201+1040301+1040401;
    %let last_d=1031201+1040101+1040201+1040301+1040401+1040501;

    options mprint mlogic;

    %macro peulot;
      /* start from an empty cumulative dataset */
      data nohesh.netunim_200311_200404; delete; run;

      %do i=1 %to 6;
        /* pull the monthly event counts per customer from Teradata */
        proc sql;
          connect to teradata (user=xxx password=123 tdpid=DWPROD);
          create table nohesh.peulot_%scan(&name,&i,+) as
            select * from connection to teradata
            (select branch_cust_ip,
                    count(*) as peulot
             from bo_vall.V0500_1_FINANCIAL_EVENT as a,
                  bo_vall.VBM845_FINANCIAL_EVENT_CUST as b
             where event_start_date ge %scan(&first_d,&i,+)
               and event_start_date lt %scan(&last_d,&i,+)
               and a.event_id=b.event_id
             group by 1
            );
          disconnect from teradata;
        quit;

        /* append the month just pulled to the cumulative dataset */
        data nohesh.netunim_200311_200404;
          set nohesh.netunim_200311_200404
              nohesh.peulot_%scan(&name,&i,+);
        run;
      %end;
    %mend;

    %peulot;

    Jarque-Bera hypothesis test of normality

    Function JBTest(ReturnVector, SignificanceLevel)
    ' Jarque-Bera hypothesis test of normality.
    '
    ' Andreas Steiner, March 2006
    ' http://www.andreassteiner.net/performanceanalysis

    n = WorksheetFunction.Max(ReturnVector.Columns.Count, ReturnVector.Rows.Count)

    ReturnVectorMean = WorksheetFunction.Average(ReturnVector)
    ReturnVectorStDev = WorksheetFunction.StDev(ReturnVector)

    ' Normalize returns
    ReDim NormalizedReturns(1 To n)
    For i = 1 To n
    NormalizedReturns(i) = (ReturnVector(i) - ReturnVectorMean) / ReturnVectorStDev
    Next i

    ' Calculate 3rd and 4th moments (skewness and kurtosis)
    S = 0
    K = 0
    For i = 1 To n
    S = S + NormalizedReturns(i) ^ 3
    K = K + NormalizedReturns(i) ^ 4
    Next i
    S = S / n
    K = K / n - 3

    JB = n * ((S ^ 2) / 6 + (K ^ 2) / 24)
    pValue = WorksheetFunction.ChiDist(JB, 2)

    JBTest = (SignificanceLevel < pValue)

    End Function
    Function JBCriticalValue(ReturnVector, SignificanceLevel)
    ' Jarque-Bera hypothesis test of normality.
    '
    ' Andreas Steiner, March 2006
    ' http://www.andreassteiner.net/performanceanalysis

    JBCriticalValue = WorksheetFunction.ChiInv(SignificanceLevel, 2)

    End Function
    Function JBpValue(ReturnVector, SignificanceLevel)
    ' Jarque-Bera hypothesis test of normality.
    '
    ' Andreas Steiner, March 2006
    ' http://www.andreassteiner.net/performanceanalysis

    n = WorksheetFunction.Max(ReturnVector.Columns.Count, ReturnVector.Rows.Count)

    ReturnVectorMean = WorksheetFunction.Average(ReturnVector)
    ReturnVectorStDev = WorksheetFunction.StDev(ReturnVector)

    ' Normalize returns
    ReDim NormalizedReturns(1 To n)
    For i = 1 To n
    NormalizedReturns(i) = (ReturnVector(i) - ReturnVectorMean) / ReturnVectorStDev
    Next i

    ' Calculate 3rd and 4th moments (skewness and kurtosis)
    S = 0
    K = 0
    For i = 1 To n
    S = S + NormalizedReturns(i) ^ 3
    K = K + NormalizedReturns(i) ^ 4
    Next i
    S = S / n
    K = K / n - 3

    JB = n * ((S ^ 2) / 6 + (K ^ 2) / 24)
    JBpValue = WorksheetFunction.ChiDist(JB, 2)

    End Function
    Function JBStat(ReturnVector, SignificanceLevel)
    ' Jarque-Bera hypothesis test of normality.
    '
    ' Andreas Steiner, March 2006
    ' http://www.andreassteiner.net/performanceanalysis

    n = WorksheetFunction.Max(ReturnVector.Columns.Count, ReturnVector.Rows.Count)

    ReturnVectorMean = WorksheetFunction.Average(ReturnVector)
    ReturnVectorStDev = WorksheetFunction.StDev(ReturnVector)

    ' Normalize returns
    ReDim NormalizedReturns(1 To n)
    For i = 1 To n
    NormalizedReturns(i) = (ReturnVector(i) - ReturnVectorMean) / ReturnVectorStDev
    Next i

    ' Calculate 3rd and 4th moments (skewness and kurtosis)
    S = 0
    K = 0
    For i = 1 To n
    S = S + NormalizedReturns(i) ^ 3
    K = K + NormalizedReturns(i) ^ 4
    Next i
    S = S / n
    K = K / n - 3

    JBStat = n * ((S ^ 2) / 6 + (K ^ 2) / 24)

    End Function
    EXCEL FUNCTION

    Descriptive statistics

    Measures of Skewness and Kurtosis



    Skewness is a measure of symmetry, or more precisely, the lack of symmetry. A
    distribution, or data set, is symmetric if it looks the same to the left and
    right of the center point. Kurtosis is a measure of whether the data are
    heavy-tailed or light-tailed relative to a normal distribution.
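
    As a quick illustration, the sample skewness and kurtosis can also be requested
    from SAS with proc means; sashelp.class and the weight variable below are just
    convenient built-in placeholders:

    proc means data=sashelp.class n mean std skewness kurtosis;
      var weight;   /* any numeric analysis variable */
    run;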


    Tuesday, February 06, 2007

    SAS/Excel Tricks

    ods noresults;
    ods listing close;
    ods html body="c:\temp\classods.xls";
    proc print data=sashelp.class(obs=10);
    run;
    ods html close;
    ods html body="c:\temp\shoesods.xls";
    proc print data=sashelp.shoes(obs=10);
    run;
    ods html close;
    ods html body="c:\temp\zipcodeods.xls";
    proc print data=sashelp.zipcode(obs=10);
    run;
    ods html close;
    ods listing;
    ods results;



    Macro to Combine Worksheets:



    %macro many2one(in=,out=);
    options noxwait;
    x erase "&out";
    options xwait;

    data _null_;
    file "c:\temp\class.vbs";
    put 'Set XL = CreateObject("Excel.Application")' /
    'XL.Visible=True';
    %let n=1;
    %let from=%scan(&in,&n," ");
    %do %while("&from" ne "");
    %let fromwb=%scan(&from,1,"!");
    %let fromws=%scan(&from,2,"!");
    put "XL.Workbooks.Open ""&fromwb""";
    %if &n=1 %then
    put "XL.ActiveWorkbook.SaveAs ""&out"", -4143"%str(;);
    %else %do;
    put "XL.Workbooks(""%scan(&fromwb,-1,'\')"").Sheets(""&fromws"").Copy ,XL.Workbooks(""%scan(&out,-1,'\')"").Sheets(%eval(&n-1))";
    put "XL.Workbooks(""%scan(&fromwb,-1,'\')"").Close";
    %end;
    %let n=%eval(&n+1);
    %let from=%scan(&in,&n, " ");
    %end;
    put "XL.Workbooks(""%scan(&out,-1,'\')"").sheets(1).activate";
    put "XL.Workbooks(""%scan(&out,-1,'\')"").Save";
    put "XL.Quit";
    run;

    x 'c:\temp\class.vbs';
    %mend;

    Example:

    %many2one(in=c:\temp\classods.xls!classods
    c:\temp\shoesods.xls!shoesods c:\temp\zipcodeods.xls!zipcodeods,
    out=c:\temp\combined.xls);


    Stratified Random Sampling

    Stratified Random Sampling, also sometimes called proportional or quota random sampling, involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup. In more formal terms:



    Objective: Divide the population into non-overlapping groups (i.e., strata)
    N1, N2, N3, ..., Ni, such that N1 + N2 + N3 + ... + Ni = N. Then take a simple
    random sample with sampling fraction f = n/N within each stratum.




    There are several major reasons why you might prefer stratified sampling over
    simple random sampling. First, it assures that you will be able to represent not
    only the overall population but also key subgroups of the population, especially
    small minority groups. If a subgroup is extremely small, you can use different
    sampling fractions (f) within the different strata to randomly over-sample the
    small group (although you'll then have to weight the within-group estimates by
    the sampling fraction whenever you want overall population estimates). When we
    use the same sampling fraction within all strata, we are conducting proportionate
    stratified random sampling; when we use different sampling fractions in the
    strata, we call this disproportionate stratified random sampling. Second,
    stratified random sampling will generally have more statistical precision than
    simple random sampling. This will only be true if the strata are homogeneous: if
    they are, we expect the variability within groups to be lower than the
    variability for the population as a whole, and stratified sampling capitalizes on
    that fact.
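
    For illustration only, here is a minimal SAS sketch of proportionate stratified
    sampling with proc surveyselect; the dataset mypop, the stratum variable region,
    and the 10% sampling rate are hypothetical:

    proc sort data=mypop;
      by region;       /* surveyselect expects the data sorted by the strata variable */
    run;

    proc surveyselect data=mypop method=srs samprate=0.10
                      out=mysample seed=20070222;
      strata region;   /* simple random sample within each stratum */
    run;

    For disproportionate sampling you would give each stratum its own rate (samprate=
    accepts a list of per-stratum rates, if I remember the syntax correctly) and then
    weight the within-group estimates by the inverse sampling fractions.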

    Probability Sampling



    A probability sampling method is any method of sampling that utilizes some form of random selection. In order to have a random selection method, you must set up some process or procedure that assures that the different units in your population have equal probabilities of being chosen.



    Some Definitions :


    N = the number of cases in the sampling frame
    n = the number of cases in the sample
    NCn = the number of combinations (subsets) of n from N
    f = n/N = the sampling fraction
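
    For example, drawing n = 100 cases from a frame of N = 1,000 gives a sampling
    fraction of f = 100/1000 = 0.10.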

    Many computer programs can generate a series of random numbers. Assign one to
    each case in the sampling frame, then rearrange the list in order from the lowest
    to the highest random number. Then all you have to do is take the first hundred
    names in this sorted list. Simple random sampling is not the most statistically
    efficient method of sampling: just because of the luck of the draw, you may not
    get good representation of subgroups in a population.
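
    Following that recipe, a minimal SAS sketch would look like this; the
    sampling_frame dataset name and the seed are hypothetical:

    data framed;
      set sampling_frame;        /* one record per case in the frame */
      rand = ranuni(20070206);   /* attach a uniform random number to each case */
    run;

    proc sort data=framed;
      by rand;                   /* shuffle the frame into random order */
    run;

    data srs_sample;
      set framed(obs=100);       /* keep the first 100 cases as the sample */
    run;

    proc surveyselect with method=srs and sampsize=100 accomplishes the same thing in
    one step.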