Principles of Econometrics

March 2, 2026

Index

1 Introduction to Statistics and Econometrics

Econometrics is the use of statistical methods to analyze economic data, typically starting from non-experimental data.

Common goals of econometric analysis are:

  • estimating relationships between variables
  • testing economic theories and hypotheses
  • evaluating and implementing government and business policy
Structure of Econometric Data
  • Cross sectional data
    • observations that represent individuals, firms, cantons, countries (normally, but not necessarily, at one point in time).
    • observations are drawn randomly from a population (if not, we have a sample-selection problem)
  • Time Series Data
    • observations represent periods in time
    • observations are consecutive and hence not random
  • Pooled Cross Sections
    • at least two cross sections are combined in one data set
    • cross sections are drawn independently of each other
    • can often be treated similarly to a normal cross section
  • Panel (or longitudinal data)
    • cross-sectional units followed over time
    • panel data have a cross-sectional and time series dimension
    • useful to account for time-invariant unobservables and to model lagged responses

Causality

The causal effect of one variable on another is defined as: "how does the dependent variable change if the explanatory variable is changed, but all other factors are held constant?"
(This concept is called Ceteris Paribus).

Simply establishing a relationship (correlation) between variables can be misleading (important distinction between correlation and causation!)

There are multiple types of experiments that allow answering causal questions:

  • Randomized controlled trials (RCT)
  • Natural experiments



Probability Review: Distributions
  • Chi-Square Distribution

Let $Z_1, \dots, Z_n$ be independent random variables with $Z_i \sim N(0, 1)$, then

$X = \sum_{i=1}^{n} Z_i^2$

has a chi-square distribution with $n$ degrees of freedom and we write $X \sim \chi^2_n$.

  • t-Distribution

Let $Z \sim N(0, 1)$ and $X \sim \chi^2_n$, with $Z$ and $X$ independent, then

$T = \frac{Z}{\sqrt{X / n}}$

has a t-distribution with $n$ degrees of freedom and we write $T \sim t_n$.

  • F-Distribution

Let $X_1 \sim \chi^2_{k_1}$ and $X_2 \sim \chi^2_{k_2}$ and assume they are independent, then:

$F = \frac{X_1 / k_1}{X_2 / k_2}$

has an F-distribution with $(k_1, k_2)$ degrees of freedom and we write $F \sim F_{k_1, k_2}$.

Central Limit Theorem

The standardized average of any population with mean $\mu$ and variance $\sigma^2$ is asymptotically $N(0, 1)$ distributed, or, in other words:

$\frac{\bar{Y} - \mu}{\sigma / \sqrt{n}} \xrightarrow{d} N(0, 1)$

Law of Large Numbers

Let $Y_1, \dots, Y_n$ be independent, identically distributed random variables with mean $\mu$, then:

$\bar{Y} \xrightarrow{p} \mu$
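Both results can be checked with a quick simulation (a sketch in Python; the Uniform(0, 1) population is an illustrative choice, not from the notes):

```python
import random
import statistics

def sample_means(n=2000, reps=500, seed=42):
    """Draw `reps` samples of size `n` from Uniform(0, 1) and
    return the list of sample means."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.random() for _ in range(n)) for _ in range(reps)]

means = sample_means()
# LLN: every sample mean is close to the population mean 0.5
print(statistics.fmean(means))
# CLT: the sample means are approximately normal around 0.5,
# with standard deviation sigma / sqrt(n) = sqrt(1/12) / sqrt(2000), about 0.0065
print(statistics.stdev(means))
```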


2 Simple Regression Analysis

How can we use the data to describe economic relations or behaviors? We posit the simple linear model $y = \beta_0 + \beta_1 x + u$, where:

  • $y$: dependent variable (outcome variable)
  • $x$: independent variable (regressor, covariate, control variable, "the cause")
  • $u$: error term (disturbance)
    • it represents strictly unpredictable random behavior, unspecified or unobserved factors, or an approximation error if the relation is not perfectly linear.
  • $\beta_0$: intercept parameter
  • $\beta_1$: slope parameter

We start now with Population Modelling and consider the following assumptions:

  • SLR.1: Linear in parameters
    • in the population model the following relation holds: $y = \beta_0 + \beta_1 x + u$
  • SLR.2: Random sampling
    • we have a random sample of size $n$, $\{(x_i, y_i): i = 1, \dots, n\}$, following the population model
  • SLR.3: Sample variation in the explanatory variable
    • the sample outcomes on $x$ are not all the same value
  • SLR.4: Zero Conditional Mean
    • $E(u \mid x) = 0$

or, in other words, for every slice of the population determined by $x$, the average of $u$ is equal to the population average (which is zero).
This also implies:

$E(y \mid x) = \beta_0 + \beta_1 x$

also called the Population Regression Function.

(Figure: Population Model)

We now derive the Ordinary Least Squares (OLS).

Intuition: we estimate the population parameters from a data sample of two random variables.

  • Let $\{(x_i, y_i): i = 1, \dots, n\}$ denote a random sample of size $n$
  • plot the data as a scatter plot in an $(x, y)$ coordinate system
  • the regression equation allocates to each $x_i$ a fitted value $\hat{y}_i$ plus a residual $\hat{u}_i$.

But how can we derive the estimates for $\beta_0$ and $\beta_1$?

We first rely on the SLR.4 assumption to derive two equations:

$E(u) = 0 \quad \text{and} \quad E(xu) = 0$

These are called population moment restrictions.

Using then the sample analogues of these population moments, we can get estimates of the population parameters:

$\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0 \quad \text{and} \quad \frac{1}{n}\sum_{i=1}^{n} x_i (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0$

Recall:

$\bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}$

We rewrite this as $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$; we then plug it into the second moment condition and solve for $\hat{\beta}_1$:

$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$

Finally, we can easily put $\hat{\beta}_1$ back into the first equation and derive $\hat{\beta}_0$.

Intuition: the slope estimate is the sample covariance of $x$ and $y$, divided by the sample variance of $x$.

  • if $x$ and $y$ are positively correlated, the slope will be positive.
  • if $x$ and $y$ are negatively correlated, the slope will be negative.

Moreover, intuitively, the OLS is fitting a line through the sample points such that the sum of squared residuals is as small as possible.

Algebraic Properties of OLS:

  • the sum of the OLS residuals is zero.
  • the sample average of the OLS residuals is zero.
  • the sample covariance between the regressors and the OLS residuals is zero.
  • the OLS regression line always goes through the mean of the sample, $(\bar{x}, \bar{y})$.

Moreover, we notice that each observation is made up of an explained and an unexplained part ($y_i = \hat{y}_i + \hat{u}_i$).

Using this terminology, we can define:

$SST = \sum_{i=1}^{n} (y_i - \bar{y})^2$ (total), $\quad SSE = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2$ (explained), $\quad SSR = \sum_{i=1}^{n} \hat{u}_i^2$ (residual)

To understand how well the regression line fits the sample, we define:

$R^2 = SSE / SST = 1 - SSR / SST$

Be aware that the term linear in an OLS model does not mean a linear relationship between the variables, but a model in which the parameters enter in a linear way.
The following are all linear models (in the parameters):

$y = \beta_0 + \beta_1 x + u, \qquad \log(y) = \beta_0 + \beta_1 \log(x) + u, \qquad y = \beta_0 + \beta_1 x^2 + u$


Implications of the Simple Linear Regression
  1. OLS is unbiased: $E(\hat{\beta}_j) = \beta_j$ (the proof depends on the 4 assumptions; if any fails, OLS is not necessarily unbiased)
  2. The sampling distribution of our estimates is centered around the true parameter

But, following the second implication, how likely is it that the true slope is slightly larger, smaller or zero?

This question can be translated into another assumption:

  • SLR.5 Homoskedasticity: assume $Var(u \mid x) = \sigma^2$

Using then the homoskedasticity assumption, $Var(u \mid x) = E(u^2 \mid x) = \sigma^2$.

Hence, $\sigma^2$ is also the unconditional variance, called the error variance.

We can then use this to find the variance of $\hat{\beta}_1$:

$Var(\hat{\beta}_1) = \frac{\sigma^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$

  • the larger the error variance $\sigma^2$, the larger the variance of the slope estimate.
  • the larger the variability in the $x_i$ ($\sum_i (x_i - \bar{x})^2$), the smaller the variance of the slope estimate.

Finally, starting from the residuals $\hat{u}_i$, we can form an unbiased estimate of the error variance (often called the mean squared error (MSE)), denoted by $\hat{\sigma}^2$:

$\hat{\sigma}^2 = \frac{1}{n - 2} \sum_{i=1}^{n} \hat{u}_i^2$

Intuition: $\sigma^2$ is the truth; $\hat{\sigma}^2$ is our best guess based on the sample we have.

Moreover, we divide by $n - 2$ because, estimating $\hat{\beta}_0$ and $\hat{\beta}_1$, we lost 2 degrees of freedom.


3 Multiple Regression Analysis: Basics

The key problem with simple linear regression is that the assumption $E(u \mid x) = 0$ is often problematic.

Consider, for example, that the true population model is:

$wage = \beta_0 + \beta_1 educ + \beta_2 abil + u$

The previous assumption states that the error term has zero expected value given the regressor or, in other words, that the omitted factors (here, ability) do not correlate with the included regressor.

If they do, $\hat{\beta}_1$ is biased compared to the "true parameter", so it doesn't measure the causal effect of education on wage.

Thus, instead of assuming that the additional variables are uncorrelated with the included regressor, the multiple regression model allows us to include them directly in the model:

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$

  • $\beta_0$ is still the intercept
  • $\beta_1, \dots, \beta_k$ are the slope parameters
  • $u$ is the error term.

We still need a zero conditional mean: $E(u \mid x_1, \dots, x_k) = 0$, which means, in other words, that all the remaining factors that influence the outcome are unrelated to the included regressors.


In order to estimate the parameters, we still use the Ordinary Least Squares method and we minimize the sum of squared residuals:

$\min \sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_{i1} - \dots - \hat{\beta}_k x_{ik})^2$

leading to $k + 1$ first-order conditions to derive the $k + 1$ parameters.

The estimated model allows a ceteris paribus interpretation: a change in $x_j$, $\Delta x_j$, leads to a change in $y$ given by $\Delta \hat{y} = \hat{\beta}_j \Delta x_j$, keeping all the other regressors fixed.


Frisch-Waugh-Lovell (FWL) Theorem

Given a multiple regression model (with two regressors, for example) $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$, $\beta_1$ can be found in 2 steps:

  1. Regress $x_1$ on $x_2$:

Save then the residuals $\hat{r}_1$. They represent the part of $x_1$ not correlated with $x_2$.

  2. Regress $y$ on the residuals $\hat{r}_1$:

The resulting coefficient will be identical to $\hat{\beta}_1$ from the original model.


The previous definition of $R^2$ is still valid, but we can add the following remarks:

  • $R^2$ is the squared correlation coefficient between the actual $y$ and the predicted $\hat{y}$.
  • $R^2$ never decreases when adding independent variables to a regression (usually it will increase).
  • Because it increases when changing the number of parameters, it is NOT a good measure to compare different models.

As in the simple regression model, we can formalize the assumptions for the multiple one:

  • MLR.1: Linear in parameters
    in the population model the following relation holds:

    $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$

  • MLR.2: Random sampling
    we have a random sample of size $n$:

    $\{(x_{i1}, \dots, x_{ik}, y_i): i = 1, \dots, n\}$

  • MLR.3: No perfect collinearity (sample variation in explanatory variables)
    no explanatory variable is an exact linear combination of the others, and each regressor has variation in the sample.

  • MLR.4: Zero Conditional Mean

    $E(u \mid x_1, \dots, x_k) = 0$

    or, for every slice of the population determined by the $x_j$, the average of $u$ is zero.

    This also implies:

    $E(y \mid x_1, \dots, x_k) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k$

    also called the Population Regression Function.

Then, we can derive the following:

Implication 1: Unbiasedness of OLS

Under the previous assumptions, the OLS estimator is unbiased: $E(\hat{\beta}_j) = \beta_j$.

  • If we include irrelevant variables in our model, the OLS estimator remains unbiased.
  • If we exclude relevant variables, OLS will usually be biased.

Let's suppose the true model is:

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$

but our model omits $x_2$:

$y = \beta_0 + \beta_1 x_1 + v$

and we actually estimate:

$\tilde{y} = \tilde{\beta}_0 + \tilde{\beta}_1 x_1$

The slope parameter will then be:

$\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} (x_{i1} - \bar{x}_1) y_i}{\sum_{i=1}^{n} (x_{i1} - \bar{x}_1)^2}$

Recall that the numerator is (substituting the true model for $y_i$):

$\sum_i (x_{i1} - \bar{x}_1)(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + u_i)$

Since $E[\sum_i (x_{i1} - \bar{x}_1) u_i] = 0$, it implies that:

$E(\tilde{\beta}_1) = \beta_1 + \beta_2 \frac{\sum_i (x_{i1} - \bar{x}_1) x_{i2}}{\sum_i (x_{i1} - \bar{x}_1)^2}$

Note that the part after $\beta_2$ is the slope $\tilde{\delta}_1$ from the regression of $x_2$ on $x_1$:

$\tilde{\delta}_1 = \frac{\sum_i (x_{i1} - \bar{x}_1) x_{i2}}{\sum_i (x_{i1} - \bar{x}_1)^2}$

so we have:

$E(\tilde{\beta}_1) = \beta_1 + \beta_2 \tilde{\delta}_1$
|  | Corr($x_1, x_2$) > 0 | Corr($x_1, x_2$) < 0 |
| --- | --- | --- |
| $\beta_2 > 0$ | Positive bias | Negative bias |
| $\beta_2 < 0$ | Negative bias | Positive bias |

Then we can consider two corner cases:

  • $\beta_2 = 0$: $x_2$ doesn't affect $y$: no bias.
  • $\tilde{\delta}_1 = 0$: $x_1$ and $x_2$ are uncorrelated in the sample: no bias.



Implication 2: Efficiency of the OLS Estimator

Once we know that the estimate is centered around the true parameter, we want to understand how it is distributed.

If we add a fifth assumption (MLR.5 Homoskedasticity), $Var(u \mid x_1, \dots, x_k) = \sigma^2$, we know that:

$Var(y \mid x_1, \dots, x_k) = \sigma^2$

and we can then derive the sampling variances of the estimators.




Theorem: Sampling Variances of the OLS Slope Estimators

Given the assumptions from MLR.1 to MLR.5,

$Var(\hat{\beta}_j) = \frac{\sigma^2}{SST_j (1 - R_j^2)}$

where $SST_j = \sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2$ and $R_j^2$ is the $R^2$ from the regression of $x_j$ on all the other regressors.

  • a larger $\sigma^2$ leads to larger variance in the estimators.
  • a larger $SST_j$ (more sample variation in $x_j$) implies a smaller variance in the estimators.
  • a larger $R_j^2$ (linear dependence between the variables) implies larger variance in the estimators.
    • this is called the multicollinearity problem.

Let's analyze now what happens if we mis-specify the model. Consider again the true model ($y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$) and the mis-specified model ($y = \beta_0 + \beta_1 x_1 + v$).

In this case the estimated variance equals $Var(\tilde{\beta}_1) = \sigma^2 / SST_1$.

On the other hand, the variance using the true model equals:

$Var(\hat{\beta}_1) = \frac{\sigma^2}{SST_1 (1 - R_1^2)}$

So, unless $x_1$ and $x_2$ are uncorrelated: $Var(\tilde{\beta}_1) < Var(\hat{\beta}_1)$.

Intuition:

  • the variance of the estimator is smaller in the mis-specified model.
  • the mis-specified model is biased.
  • as the sample size grows, the variance of each estimator shrinks to zero, making the variance difference less important.

Estimating the error variance
  • Estimate of the error variance:

$\hat{\sigma}^2 = \frac{SSR}{n - k - 1}$

where $n - k - 1$ represents the degrees of freedom.

  • Standard deviation of $\hat{\beta}_j$: $sd(\hat{\beta}_j) = \sigma / \sqrt{SST_j (1 - R_j^2)}$
  • Standard error of $\hat{\beta}_j$: $se(\hat{\beta}_j) = \hat{\sigma} / \sqrt{SST_j (1 - R_j^2)}$



Theorem: Unbiased Estimation of $\sigma^2$:

Under the assumptions MLR.1 to MLR.5, the estimator $\hat{\sigma}^2$ is unbiased: $E(\hat{\sigma}^2) = \sigma^2$.




Theorem: Gauss-Markov Theorem

Under the assumptions MLR.1 to MLR.5, the OLS estimators are the best linear unbiased estimators (BLUEs) of the $\beta_j$:

  • Best: estimators have the lowest possible variance
  • Linear: estimators are a linear function of the $y_i$
  • Unbiased: expected value equals the population parameters.

4 Multiple Regression Analysis: Inference

So far, we've seen that, given MLR.1-5, the OLS is BLUE (most precise, most accurate among linear unbiased estimators).

To do hypothesis testing, we add another assumption.

MLR.6: Normality (Classical Linear Model [CLM] Assumption): the disturbance is independent of the regressors and it is normally distributed with zero mean and variance $\sigma^2$: $u \sim N(0, \sigma^2)$.

Under this assumption, conditional on the sample values of the independent variables we obtain:

$\hat{\beta}_j \sim N(\beta_j, Var(\hat{\beta}_j))$

(the coefficient estimate is normally distributed around the true beta)

This implies that:

$\frac{\hat{\beta}_j - \beta_j}{sd(\hat{\beta}_j)} \sim N(0, 1)$

(the standardized deviation from the true value is standard normal)

Furthermore, if we use the estimate of the variance of the disturbance ($\hat{\sigma}^2$), under the CLM assumptions we obtain:

$\frac{\hat{\beta}_j - \beta_j}{se(\hat{\beta}_j)} \sim t_{n-k-1}$

where $n - k - 1$ is the degrees of freedom.

This result will be useful to determine how likely it is that an estimate is close to $\beta_j$.

Both the Student's $t$ and the Normal distribution are symmetric and bell-shaped, but the Student's $t$:

  • has fatter tails than the normal
  • converges to the normal for an infinite sample
  • depends on the degrees of freedom
  • can be approximated with a normal distribution when the degrees of freedom are large.

(Figure: Population Model)

Hypothesis Testing

Before starting with the test, let's take a look at the different errors:

  • Type I Error: we reject the null hypothesis when it is true (false positive).
  • Type II Error: we don't reject the null hypothesis when it is false (false negative).
Test
  1. Set up the hypothesis. $H_1$ can be one-sided ($\beta_j > 0$ or $\beta_j < 0$) or two-sided ($\beta_j \neq 0$).
  2. Determine the $t$-statistic using the estimates for $\hat{\beta}_j$ and $se(\hat{\beta}_j)$.

For example, as seen before:

$t = \frac{\hat{\beta}_j}{se(\hat{\beta}_j)}$

  3. Select a significance level or, in other terms, the chance to make a Type I error, and determine the critical value, depending on whether it is a one- or two-sided hypothesis.

  4. Decide: reject $H_0$ if the absolute value of the $t$-statistic is larger than the critical value.

For example, let's consider the following model:

$y = \beta_0 + \beta_1 educ + u$

  1. We set up the hypothesis: $H_0: \beta_1 = 0$ vs $H_1: \beta_1 > 0$ ($H_0$: education doesn't increase employment)
  2. We calculate $t = \hat{\beta}_1 / se(\hat{\beta}_1)$
  3. We find the critical value $c$ for the chosen significance level using $df = n - k - 1$
  4. We reject $H_0$ if $t > c$
  • If we reject the null hypothesis, we typically say: $x_j$ is statistically significant / has a statistically significant effect on $y$ at the chosen level.
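The mechanics of the test statistic and decision rule fit in a few lines (Python sketch; the estimate and standard error below are invented for illustration):

```python
def t_stat(b_hat, se, hypothesized=0.0):
    """t-statistic for H0: beta_j equals the hypothesized value."""
    return (b_hat - hypothesized) / se

# invented numbers: estimated coefficient 0.092 with standard error 0.007
t = t_stat(0.092, 0.007)
# for large df, the two-sided 5% critical value is about 1.96
print(t, abs(t) > 1.96)  # t is about 13.1, so we reject H0 at the 5% level
```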

Graphically:

(Figure: Population Model)

More generally, we can test whether an estimate equals a specific value: $H_0: \beta_j = a_j$.

In this case, we use the appropriate $t$-statistic: $t = \frac{\hat{\beta}_j - a_j}{se(\hat{\beta}_j)}$, where $a_j = 0$ for the standard test.


Confidence Intervals

Another way to use statistical testing is to construct confidence intervals using the same critical value as for a two-sided test.

A confidence interval is defined as $\hat{\beta}_j \pm c \cdot se(\hat{\beta}_j)$, where $c$ is the $(1 - \alpha/2)$ percentile of the $t_{n-k-1}$ distribution.

P-Values for -Tests

An alternative approach is to calculate the smallest significance level at which the null hypothesis would be rejected given the data.

So we compute the $t$-statistic and we look up at which percentile it lies in the appropriate $t$-distribution. This is called the p-value.

The p-value represents the probability of observing a $t$-statistic as extreme as we did, if the null were true.

Testing more complex hypotheses - Linear Combinations

Suppose we want to test whether $\beta_1$ is equal to another parameter, that is: $H_0: \beta_1 = \beta_2$. Then the test statistic is $t = \frac{\hat{\beta}_1 - \hat{\beta}_2}{se(\hat{\beta}_1 - \hat{\beta}_2)}$.

But, if we expand the formula:

$se(\hat{\beta}_1 - \hat{\beta}_2) = \sqrt{Var(\hat{\beta}_1) + Var(\hat{\beta}_2) - 2 Cov(\hat{\beta}_1, \hat{\beta}_2)}$

So we need the covariance term, which we don't usually have in standard regression output.

To avoid this, we can use the following "trick".

We set $\theta = \beta_1 - \beta_2$, so that $H_0: \theta = 0$. To do that, we also have to substitute $\beta_1 = \theta + \beta_2$ in our model.
So, for example, we can consider:

Multiple Linear Restrictions

So far we tested a single linear restriction ($H_0: \beta_j = a_j$). Now we want to jointly test multiple hypotheses about the parameters.

A typical example is testing "exclusion restrictions", that means checking whether a group of parameters are all equal to zero.

Then the null hypothesis might be something like: $H_0: \beta_{k-q+1} = 0, \dots, \beta_k = 0$, but we cannot check each $t$-statistic separately because we want to know if the parameters are jointly significant.
We instead need to estimate the restricted model without all the excluded $x_j$ as well as the un-restricted model with all the $x_j$ included.

Intuition: we want to know if the change in SSR is big enough.

This is called the $F$-statistic, defined as:

$F = \frac{(SSR_r - SSR_{ur}) / q}{SSR_{ur} / (n - k - 1)}$

where $SSR_r$ is the restricted, $SSR_{ur}$ the un-restricted sum of squared residuals. Note that the statistic is always positive.

Intuition: the statistic is measuring the relative increase in the SSR when moving from the unrestricted to the restricted model.

To decide if the increase in SSR is "big enough" to reject the exclusions, we compare $F$ with the $F$ distribution; indeed we know that $F \sim F_{q, n-k-1}$, where:

  • $q$ is the numerator degrees of freedom
  • $n - k - 1$ is the denominator degrees of freedom.
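Computing the $F$-statistic is mechanical once the two SSRs are known (Python sketch with invented numbers):

```python
def f_stat(ssr_r, ssr_ur, q, n, k):
    """F-statistic for q exclusion restrictions: the relative increase in SSR
    from restricting the model, scaled by the unrestricted df n - k - 1."""
    return ((ssr_r - ssr_ur) / q) / (ssr_ur / (n - k - 1))

# invented example: dropping 3 regressors from a model with k = 5 and n = 353
print(f_stat(ssr_r=198.3, ssr_ur=183.2, q=3, n=353, k=5))
# compare with the F(3, 347) critical value (about 2.6 at the 5% level)
```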

(Figure: F test)

OLS asymptotics

Under the Gauss-Markov assumptions the OLS is BLUE; however, they are not always fulfilled with real data. Large samples (big data!) come to our rescue! It can be shown that some nice properties stay intact as $n \to \infty$. (Larger samples allow us to relax some assumptions.)

  • Consistency: if estimators are consistent, the distribution of the estimator collapses to the parameter value as $n \to \infty$.
    This implies that we can replace MLR.4 with the weaker assumption of zero mean and zero correlation between the error and the regressors.

Just as we derived the omitted variable bias earlier, we can think about the inconsistency (asymptotic bias).

Consider the true model $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$ and the estimated model $y = \beta_0 + \beta_1 x_1 + v$, so that $v = \beta_2 x_2 + u$.
In this case, $\text{plim} \, \tilde{\beta}_1 = \beta_1 + \beta_2 \delta_1$, where $\delta_1 = Cov(x_1, x_2) / Var(x_1)$, which tells us how much our estimate ($\tilde{\beta}_1$) deviates from the true parameter ($\beta_1$).

Intuition: inconsistency is a large sample problem: it does not go away as we add data.

Large Sample Inference

So far, we relied on the assumption of normally distributed errors, but this assumption can often break down! Again, large samples come to the rescue: as $n \to \infty$, the central limit theorem shows that OLS estimates are asymptotically normal.

Thus, we no longer need to assume normality with a large sample; we get it anyway.

If $u$ is not normally distributed, we sometimes will refer to the standard error as the asymptotic standard error. In general, we can expect standard errors to shrink at a rate proportional to the inverse of $\sqrt{n}$.

Asymptotic Efficiency

There are other estimators besides OLS that are consistent. However, under the Gauss-Markov assumptions, the OLS estimators have the smallest asymptotic variances; therefore we say that OLS is asymptotically efficient.


5 Multiple Regression Analysis: Further Issues

Review

To test hypotheses about estimates, we previously relied on the assumption of normally distributed errors (MLR.6) in order to derive the $t$ and $F$ distributions.

This implied that the distribution of $\hat{\beta}_j$, given the regressors, was normal as well.

However, this assumption about normality can often break down! (example: any clearly skewed variable, like wages, arrests, savings etc., cannot be normal, since normal distributions are symmetric).

Also in this case, large samples are the solution: if $n \to \infty$, the Central Limit Theorem shows that OLS estimates are asymptotically normal.

In other terms, for any population with mean $\mu$ and standard deviation $\sigma$, the sampling distribution of the sample mean is approximately normal with mean $\mu$ and standard deviation $\sigma / \sqrt{n}$.

Secondly, the $t$-distribution approaches a normal distribution for a large $df$ (degrees of freedom), so we no longer need to assume normality with a large sample.

As we said, if the error is NOT normally distributed, we sometimes refer to the standard error as the asymptotic standard error. (We can expect standard errors to shrink at a rate proportional to the inverse of $\sqrt{n}$.)

Further Issues in Multiple Regression Analysis: Scaling Variables

Changing the scale of a variable will lead to a change in the scale of the coefficients and the standard errors, without a meaningful change in significance/interpretation. The same applies to a change in the scale of $y$.

Occasionally, we will see references to standardized coefficients, calculated using the standardized versions of $y$ and the $x_j$, so coefficients reflect standard deviation changes.

Functional Form

OLS can also be used for relationships that are not strictly linear in $x$ by using non-linear functions of $x$ (as long as the model is linear in the parameters).

| Model | Equation | Interpretation |
| --- | --- | --- |
| Level-level | $y = \beta_0 + \beta_1 x + u$ | $\Delta y = \beta_1 \Delta x$ |
| Level-log | $y = \beta_0 + \beta_1 \log(x) + u$ | $\Delta y \approx (\beta_1 / 100) \, \% \Delta x$ |
| Log-level | $\log(y) = \beta_0 + \beta_1 x + u$ | $\% \Delta y \approx (100 \beta_1) \Delta x$ |
| Log-log | $\log(y) = \beta_0 + \beta_1 \log(x) + u$ | $\% \Delta y \approx \beta_1 \, \% \Delta x$ |

Log Models
  • they are invariant to the scale of the variables since it's all about percentage changes
  • they give a direct estimate of the elasticity (in the log-log case)
  • for models with $y$ in levels, the conditional distribution is often heteroskedastic or skewed, while that of $\log(y)$ is much less so
  • the distribution of $\log(y)$ is narrower, limiting the effect of outliers.

Note that, when using the log form, variables have to be positive!

Quadratic Models

For a model like $y = \beta_0 + \beta_1 x + \beta_2 x^2 + u$ we know that:

  • if $\beta_1$ is positive and $\beta_2$ is negative, $y$ is increasing in $x$ at first and then decreasing.
  • if $\beta_1$ is negative and $\beta_2$ is positive the opposite happens.
  • The turning point is calculated by setting the derivative to zero and lies at:

$x^* = \left| \frac{\beta_1}{2 \beta_2} \right|$


Adjusted R-Squared

Recall that $R^2$ always increases as more variables are added to the model, since SSR never decreases with more variables:

$R^2 = 1 - \frac{SSR}{SST}$

The adjusted $R^2$ takes into account the number of variables in the model:

$\bar{R}^2 = 1 - \frac{SSR / (n - k - 1)}{SST / (n - 1)}$

The adjusted $R^2$ penalizes models with more variables, especially for low $n$ and high $k$, and increases only when additional variables are added whose $t$-statistic is larger than 1; adding variables with poor explanatory power decreases the adjusted $R^2$.

In other terms, we usually use the adjusted $R^2$ to compare models with the same $y$, but never models with a different $y$.
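The adjustment is a one-liner (Python sketch; the $R^2$ values are invented to show that a higher $R^2$ can come with a lower adjusted $R^2$):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(0.30, n=50, k=3))  # about 0.254
print(adjusted_r2(0.31, n=50, k=6))  # about 0.214: lower despite the higher R^2
```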

(Standard Errors for) Predictions

Suppose we want to use our estimates to obtain a specific prediction at the values $x_1 = c_1, \dots, x_k = c_k$; in this case we want to estimate:

$\theta_0 = \beta_0 + \beta_1 c_1 + \dots + \beta_k c_k$

The point prediction is easily done by substituting the values; if we also want an idea of the precision of the estimate, we can:

  • rewrite $\beta_0 = \theta_0 - \beta_1 c_1 - \dots - \beta_k c_k$
  • substitute into the model to obtain: $y = \theta_0 + \beta_1 (x_1 - c_1) + \dots + \beta_k (x_k - c_k) + u$
  • regress $y$ on the centered regressors: the intercept will give the predicted value and its standard error. The minimum standard error is obtained when the $c_j$ equal the means of the $x_j$.


Regression with Dummy Variables

A dummy (binary) variable is a variable that takes on the value 0 or 1.

Consider a simple model with one continuous variable ($educ$) and a dummy ($female$):

$wage = \beta_0 + \delta_0 female + \beta_1 educ + u$

  • $female = 1$ if female, $0$ otherwise
  • $educ$: education in years
  • $wage$: wage

This can be interpreted as an intercept shift:

  • intercept $\beta_0$ if $female = 0$ (male, BASE GROUP)
  • intercept $\beta_0 + \delta_0$ if $female = 1$ (female)

(Figure: Population Model)


Dummies for Multiple Categories

Any categorical variable can be turned into a set of dummy variables. Because the base group is represented by the intercept, if there are $n$ categories there should be $n - 1$ dummy variables.

Note that we can model interactions between dummies to divide into subgroups, and interactions between dummies and continuous variables to model a change in slope.

(Figure: dummy variables)

Testing for Differences Across Groups

Testing whether a regression function is different for one group versus another can be thought of as testing for the joint significance of the dummy and its interactions with all other variables.

So we can estimate the model with and without the interactions and form an $F$-statistic (very tedious in practice!).

The Chow Test

We can compute the $F$-statistic without running the unrestricted model with all the interactions; instead we can:

  • run the model for group 1 (using the $n_1$ observations in that group) and get $SSR_1$
  • run the model for group 2 (using the $n_2$ observations in that group) and get $SSR_2$
  • run the restricted (pooled) model for all observations (using $n = n_1 + n_2$) and get $SSR_p$
  • compute the $F$-statistic as:

$F = \frac{[SSR_p - (SSR_1 + SSR_2)] / (k + 1)}{(SSR_1 + SSR_2) / (n - 2(k + 1))}$
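The Chow statistic is then straightforward to compute (Python sketch with a hand-computable invented example):

```python
def chow_f(ssr_pooled, ssr_1, ssr_2, n, k):
    """Chow F-statistic: the unrestricted SSR is SSR1 + SSR2 (separate fits),
    the restricted SSR comes from the pooled regression; q = k + 1 restrictions."""
    ssr_ur = ssr_1 + ssr_2
    return ((ssr_pooled - ssr_ur) / (k + 1)) / (ssr_ur / (n - 2 * (k + 1)))

# invented numbers: ((100 - 80) / 2) / (80 / 20) = 2.5
print(chow_f(ssr_pooled=100.0, ssr_1=40.0, ssr_2=40.0, n=24, k=1))
```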
Dummy as Dependent Variable: Linear Probability Model

When $y$ is binary, $E(y \mid x) = P(y = 1 \mid x)$, so we can write our model as:

$P(y = 1 \mid x_1, \dots, x_k) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k$

  • $\beta_j$ represents the change in the probability when $x_j$ changes by 1.
  • the predicted $\hat{y}$ is the probability that $y = 1$.

However, potential problems arise since the prediction can be outside $[0, 1]$.
Also, this model will violate the assumption of homoskedasticity, which will affect inference.

Despite everything, OLS is usually a good starting point when $y$ is binary.

6 Heteroskedasticity and Other Problems

Assumptions of the Multiple Linear Regression (MLR) Model
  • MLR.1: Linear in parameters
    • In the population model, the following relationship holds: $y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + u$
  • MLR.2: Random sampling
    • We have a random sample of size $n$: $\{(x_{i1}, \dots, x_{ik}, y_i): i = 1, \dots, n\}$
  • MLR.3: No perfect collinearity
    • None of the independent variables is constant, and there are no exact linear relationships among regressors.
  • MLR.4: Zero conditional mean
    • The error has zero conditional mean: $E(u \mid x_1, \dots, x_k) = 0$
  • This implies $E(y \mid x_1, \dots, x_k) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k$.

  • MLR.5: Homoskedasticity
    • Assume constant conditional variance of the error: $Var(u \mid x_1, \dots, x_k) = \sigma^2$
  • MLR.6: Normality
    • The disturbance is independent of the regressors and normally distributed with zero mean and variance $\sigma^2$: $u \sim N(0, \sigma^2)$

Recall that the assumption of homoskedasticity implied that, conditional on the explanatory variables, the variance of the unobserved error was constant.
If this is not true, then the variance is different for different values of the explanatory variables and the errors are said to be heteroskedastic.

(Figure: Population Model)

OLS is still unbiased and consistent, even if we do not assume homoskedasticity.

However, the standard errors of the estimates are biased if we have heteroskedasticity.

  • If the standard errors are biased, we cannot do inference based on the usual $t$-, $F$-, $LM$-statistics. (The $LM$ statistic is $n R^2$, where the $R^2$ is obtained regressing the residuals on all the variables; the LM statistic has a $\chi^2$-distribution.)



Variance with Heteroskedasticity

For the simple bivariate case, heteroskedasticity implies that:

$Var(u_i \mid x_i) = \sigma_i^2$

OLS slope decomposition:

$\hat{\beta}_1 = \beta_1 + \frac{\sum_{i=1}^{n} (x_i - \bar{x}) u_i}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$

Conditional variance of $\hat{\beta}_1$ under heteroskedasticity:

$Var(\hat{\beta}_1) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2 \sigma_i^2}{SST_x^2}$

Note that this differs from the homoskedastic case, where it collapses to $\sigma^2 / SST_x$.

A valid (consistent) estimator of the variance of $\hat{\beta}_1$ when $\sigma_i^2 \neq \sigma^2$ is:

$\widehat{Var}(\hat{\beta}_1) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2 \hat{u}_i^2}{SST_x^2}$

where $\hat{u}_i$ are the OLS residuals.

Note that this is different from the homoskedastic estimator: $\hat{\sigma}^2 / SST_x$.


Robust Standard Errors

Consider the model:

$y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + u$

With heteroskedasticity, a valid (consistent) estimator of $Var(\hat{\beta}_j)$ is:

$\widehat{Var}(\hat{\beta}_j) = \frac{\sum_{i=1}^{n} \hat{r}_{ij}^2 \hat{u}_i^2}{SSR_j^2}$

where:

  • $\hat{r}_{ij}$ is the $i$-th residual from regressing $x_j$ on all other independent variables.
  • $SSR_j$ is the sum of squared residuals from this auxiliary regression.
  • $\hat{u}_i$ are the OLS residuals from the original model.

So, the corresponding robust standard error is:

$se(\hat{\beta}_j) = \sqrt{\widehat{Var}(\hat{\beta}_j)}$

Sometimes a finite-sample correction is used: multiply the variance estimator by $n / (n - k - 1)$.

As $n \to \infty$, this correction becomes negligible.

Note that robust standard errors are justified asymptotically. In small samples, $t$-statistics based on robust standard errors may not be close to the $t$-distribution.



Testing for Heteroskedasticity

We want to test:

$H_0: Var(u \mid x_1, \dots, x_k) = \sigma^2$

If we assume the relationship between $u^2$ and the $x_j$ to be linear, we can test it as a linear restriction:

$u^2 = \delta_0 + \delta_1 x_1 + \dots + \delta_k x_k + v, \qquad H_0: \delta_1 = \dots = \delta_k = 0$

The Breusch-Pagan Test

In this test, we do not observe the error, but we can estimate it with the residuals from the OLS regression.

After regressing the squared residuals on all the $x_j$, we can use the $R^2_{\hat{u}^2}$ to form an $F$- or $LM$-test.

  • The $F$-statistic is distributed as $F_{k, n-k-1}$ and it is equal to: $F = \frac{R^2_{\hat{u}^2} / k}{(1 - R^2_{\hat{u}^2}) / (n - k - 1)}$
  • The $LM$-statistic follows a $\chi^2_k$-distribution and is: $LM = n \cdot R^2_{\hat{u}^2}$

Note that the Breusch-Pagan test detects only linear forms of heteroskedasticity.



The White Test

The White Test allows for non-linearities by using squares and cross-products of all the $x_j$.

Consider then that the fitted values from OLS are a function of all the $x_j$. Therefore $\hat{y}^2$ will be a function of the squares and cross-products, so we can:

  • regress the squared residuals on $\hat{y}$ and $\hat{y}^2$
  • use the $R^2$ to form an $F$- or $LM$-statistic

Weighted Least Squares

While it is always possible to estimate robust standard errors, if we know something about the specific form of heteroskedasticity we can obtain more efficient estimates.

Intuition: we transform the model into one that has homoskedastic errors; if we do this, we call the estimators weighted least squares (WLS).

Suppose the original model is:

$y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik} + u_i$

and heteroskedasticity has the form:

$Var(u_i \mid \mathbf{x}_i) = \sigma^2 h(\mathbf{x}_i) = \sigma^2 h_i$

We can define the variable:

$u_i / \sqrt{h_i}$

Because $h_i$ is a function of $\mathbf{x}_i$, conditional on $\mathbf{x}_i$ it is a constant, and $u_i / \sqrt{h_i}$ has constant variance $\sigma^2$.

Hence the transformed error is homoskedastic.

We now transform the whole equation by dividing the original model by $\sqrt{h_i}$:

$\frac{y_i}{\sqrt{h_i}} = \frac{\beta_0}{\sqrt{h_i}} + \beta_1 \frac{x_{i1}}{\sqrt{h_i}} + \dots + \beta_k \frac{x_{ik}}{\sqrt{h_i}} + \frac{u_i}{\sqrt{h_i}}$

We can then define the transformed variables as:

$y_i^* = y_i / \sqrt{h_i}, \qquad x_{ij}^* = x_{ij} / \sqrt{h_i}, \qquad u_i^* = u_i / \sqrt{h_i}$

so we obtain:

$y_i^* = \beta_0 x_{i0}^* + \beta_1 x_{i1}^* + \dots + \beta_k x_{ik}^* + u_i^*$

So OLS on this transformed model is BLUE (under the usual assumptions).

But why is this called "weighted" least squares?

OLS on transformed data minimizes:

$\sum_{i=1}^{n} \frac{(y_i - b_0 - b_1 x_{i1} - \dots - b_k x_{ik})^2}{h_i}$

Therefore WLS minimizes weighted squared residuals with weights $1 / h_i$.

Intuition:

  • If $h_i$ is large, observation $i$ has high error variance and gets a lower weight.
  • If $h_i$ is small, observation $i$ has low error variance and gets a higher weight.
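A weighted fit with known $h_i$ can be written directly from the weighted normal equations (Python sketch; the data and the assumed $h(x) = x$ are illustrative):

```python
def wls_simple(x, y, h):
    """WLS for y = b0 + b1*x with Var(u|x) = sigma^2 * h(x): minimize the sum of
    squared residuals weighted by 1/h, via the weighted normal equations."""
    w = [1.0 / hi for hi in h]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean of x
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    b1 = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) \
        / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b0 = ybar - b1 * xbar
    return b0, b1

# exact line y = 1 + 2x: WLS recovers it regardless of the weights
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]
b0, b1 = wls_simple(x, y, h=[1.0, 2.0, 3.0, 4.0])  # assume h(x) = x
print(b0, b1)  # recovers intercept 1 and slope 2
```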



Feasible GLS (FGLS) for Unknown Heteroskedasticity

In reality, we often do NOT know the exact form of heteroskedasticity ($h(\mathbf{x})$ is unknown), so we need to estimate it.

We generally assume a flexible variance model:

$Var(u \mid \mathbf{x}) = \sigma^2 \exp(\delta_0 + \delta_1 x_1 + \dots + \delta_k x_k)$

So, under this assumption, we have:

$h(\mathbf{x}) = \exp(\delta_0 + \delta_1 x_1 + \dots + \delta_k x_k)$

Note that the exponential form guarantees $h(\mathbf{x}) > 0$, so the variance cannot be negative.

From this assumption:

$u^2 = \sigma^2 \exp(\delta_0 + \delta_1 x_1 + \dots + \delta_k x_k) \, v$

if we assume $v$ independent of $x_1, \dots, x_k$.

If we take the logarithms we obtain:

$\log(u^2) = \alpha_0 + \delta_1 x_1 + \dots + \delta_k x_k + e$

Since $u$ is unobserved, we use the OLS residuals from the original regression and estimate:

$\log(\hat{u}^2) = \alpha_0 + \delta_1 x_1 + \dots + \delta_k x_k + e$

Let the fitted values from this auxiliary regression be $\hat{g}_i$, then we apply WLS

with weights equal to $1 / \hat{h}_i = 1 / \exp(\hat{g}_i)$.



Specification and Data Issues

We have seen that a linear regression can fit nonlinear relationships, but how do we know if we have the right functional form for our model?
Firstly, economic theory should guide you, but a test of functional form can be useful.

Ramsey's RESET (Regression Specification Error Test)

The RESET test relies on a trick similar to the special form of the White Test.

Instead of adding functions of the $x_j$ directly, we add and test functions of $\hat{y}$:

  • estimate $y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + \delta_1 \hat{y}^2 + \delta_2 \hat{y}^3 + error$
  • test: $H_0: \delta_1 = 0, \delta_2 = 0$
  • a significant $F$-test suggests that the model is not correctly specified (using the $F$- or $LM$-statistic)
Proxy Variables

What if a model is mis-specified because no data is available on an important variable?

It's possible to avoid omitted variable bias using a proxy variable. A proxy variable must be related to the unobservable variable.

Consider for example a model in which one regressor is unobserved, and suppose we do NOT have data on it but we do have data on a proxy for it, so we can write the unobserved variable as a linear function of the proxy plus a proxy error $v$ (the part the proxy does not capture). Then we can just substitute the proxy for the unobserved variable.

However, to have consistent estimates of the remaining parameters, we need:

  • the error of the main equation uncorrelated with the proxy, and
  • the proxy error $v$ uncorrelated with all the regressors.

Only in this case can we run the regression with the proxy substituted in.

Without the previous assumptions, we could end up with biased estimates.

If we consider for example $v$ correlated with one of the regressors, then we end up with a regression whose error term contains $v$ and is correlated with that regressor.

Intuition: the resulting bias depends on the signs of the correlations involved, but it is usually still smaller than the omitted variable bias.

Measurement Error in a Dependent Variable

Consider the following situation. We would like to estimate: $y^* = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + u$, but we only measure the true value plus an error:

$y = y^* + e_0$

We then define the measurement error $e_0 = y - y^*$.

Therefore, we really estimate:

$y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + (u + e_0)$

  • if $e_0$ and the $x_j$ are uncorrelated, then the estimates are unbiased (but with larger variances)
  • if $e_0$ is correlated with the $x_j$, then the estimates of the $\beta_j$ are biased



Measurement Error in an Explanatory Variable

We want to estimate: $y = \beta_0 + \beta_1 x_1^* + u$, so we define the measurement error as $e_1 = x_1 - x_1^*$ and we assume $E(e_1) = 0$.

Therefore, we really estimate:

$y = \beta_0 + \beta_1 x_1 + (u - \beta_1 e_1)$

The effect of the measurement error depends on the assumptions about the correlation between $e_1$ and $x_1$:

  • if $Cov(x_1, e_1) = 0$: OLS remains unbiased, but we get higher variances (similar to the proxy variable case)
  • if $Cov(x_1^*, e_1) = 0$ (case known as the Classical errors-in-variables assumption), then $e_1$ is correlated with the observed $x_1$:

$Cov(x_1, e_1) = Var(e_1)$

This implies that $x_1$ is correlated with the error $(u - \beta_1 e_1)$, so the estimate is biased:

$\text{plim} \, \hat{\beta}_1 = \beta_1 \frac{\sigma_{x_1^*}^2}{\sigma_{x_1^*}^2 + \sigma_{e_1}^2}$

Note that the multiplicative factor is always between 0 and 1, so the estimate is biased toward zero (attenuation bias).



Nonrandom Samples

If the sample is chosen on the basis of an independent variable, then estimates are unbiased.

If the sample is chosen on the basis of the dependent variable, then we have sample selection bias.

Note that sample selection can be very subtle! For example, looking at wages for workers (people who chose to work at this wage) is different from looking at wage offers.

Outliers

Sometimes an individual observation can be very different from the others and can have large effects on the outcome. Outliers are often caused by errors in data entry (a reason why looking at summary statistics of the data is very important!).

Multiple strategies to deal with outliers:

  • fix an observation where it is clear there was just an extra zero or similar
  • drop outlier observations and show regressions with/without them
  • winsorize extreme observations (for instance: observations below a low percentile set to that percentile's value, and above a high percentile set to that percentile's value).
More on winsorization (Wikipedia)
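A simple winsorizing helper (Python sketch; the percentile convention used here, nearest sorted index, is one of several reasonable choices):

```python
def winsorize(values, lower=0.01, upper=0.99):
    """Clip observations below the `lower` quantile and above the `upper`
    quantile to the quantile values themselves (index-based quantiles)."""
    s = sorted(values)
    n = len(s)
    lo = s[round(lower * (n - 1))]
    hi = s[round(upper * (n - 1))]
    return [min(max(v, lo), hi) for v in values]

data = list(range(101))              # 0, 1, ..., 100
w = winsorize(data, 0.05, 0.95)
print(min(w), max(w))  # extremes are clipped to the 5th and 95th percentiles
```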

7 Time Series Data


8 Panel Data


9 Instrumental Variables



Thanks for reading.

If you enjoy this article, please share it with a friend.
If you didn’t… well, share it anyway — maybe they have better taste.

Giacomo