This post is from the Stata Blog, posted by William Gould.

We just announced the release of Stata 16. It is now available. Click to visit stata.com/new-in-stata.

Stata 16 is a big release, which our releases usually are. This one is broader than usual. It ranges from lasso to Python and from multiple datasets in memory to multiple chains in Bayesian analysis.

The highlights are listed below. If you click on a highlight, we will spirit you away to our website, where we will describe the feature in a dry but information-dense way. Or you can scroll down and read my comments, which I hope are more entertaining even if they are less informative.

The big features of Stata 16 are

- Lasso, both for prediction and for inference
- Reproducible and automatically updating reports
- New meta-analysis suite
- Revamped and expanded choice modeling (margins works everywhere)
- Integration of Python with Stata
- Bayesian predictions, multiple chains, and more
- Extended regression models (ERMs) for panel data
- Importing of SAS and SPSS datasets
- Flexible nonparametric series regression
- Multiple datasets in memory, meaning frames
- Sample-size analysis for confidence intervals
- Nonlinear DSGE models
- Multiple-group IRT
- Panel-data Heckman-selection models
- NLMEs with lags: multiple-dose pharmacokinetic models and more
- Heteroskedastic ordered probit
- Graph sizes in inches, centimeters, and printer points
- Numerical integration in Mata
- Linear programming in Mata
- Do-file Editor: Autocompletion, syntax highlighting, and more
- Stata for Mac: Dark Mode and tabbed windows
- Set matsize obviated

Number 22 is not a link because it’s not a highlight. I added it because I suspect it will affect the most Stata users. It may not be enough to make you buy the release, but it will half tempt you. Buy the update, and you will never again have to type

. set matsize 600

And if you do type it, you will be ignored. Stata just works, and it uses less memory.

Oh, and in Stata/MP, Stata matrices can now be up to 65,534 × 65,534, meaning you can fit models with over 65,000 right-hand-side variables. Meanwhile, Mata matrices remain limited only by memory.

Here are my comments on the highlights.

1. Lasso, both for prediction and for inference

There are two parts to our implementation of lasso: prediction and inference. I suspect inference will be of more interest to our users, but we needed prediction to implement inference. By the way, when I say lasso, I mean lasso, elastic net, and square-root lasso, but if you want a features list, click the title.

Let’s start with lasso for prediction. If you type

. lasso linear y x1 x2 x3 ... x999

**lasso** will select the covariates from the **x**‘s specified and fit the model on them. **lasso** will be unlikely to choose the covariates that belong in the true model, but it will choose covariates that are collinear with them, and that works a treat for prediction. If English is not your first language, by “works a treat”, I mean great. Anyway, the **lasso** command is for prediction, and standard errors for the covariates it selects are not reported because they would be misleading.
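From the user's side, Stata's **lasso** command is a black box, but the core optimization behind the lasso penalty is simple enough to sketch. Below is a minimal Python/numpy illustration (not Stata's implementation) of cyclic coordinate descent with the soft-thresholding update; all names are mine.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: shrink z toward zero by t, clamping at zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n          # per-column mean sum of squares
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual, excluding x_j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_ss[j]
    return b
```

With a penalty of moderate size, the coefficients of irrelevant covariates are driven exactly to zero, which is the sense in which lasso "selects" covariates.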

Concerning inference, we provide four lasso-based methods: double selection, cross-fit partialing out, and two more. If you type

. dsregress y x1, controls(x2-x999)

then, *conceptually but not actually*, **y** will be fit on **x1** and the variables **lasso** selects from **x2-x999**. That’s not how the calculation is made because the variables lasso selects are not identical to the true variables that belong in the model. I said earlier that they are correlated with the true variables, and they are. Another way to think about selection is that **lasso** *estimates* the variables to be selected and, as with all estimation, that is subject to error. Anyway, the inference calculations are robust to those errors. Reported will be the coefficient and its standard error for **x1**. I specified one variable of special interest in the example, but you can specify however many you wish.

2. Reproducible and automatically updating reports

The inelegant title above is trying to say (1) reports that reproduce themselves just as they were originally and (2) reports that, when run again, update themselves by running the analysis on the latest data. Stata has always been strong on both, and we have added more features. I don’t want to downplay the additions, but neither do I want to discuss them. Click the title to learn about them.

I think what’s important is another aspect of what we did. The real problem was that we never told you how to use the reporting features. Now we do in an all-new manual. We tell you and we show you, with examples and workflows. Here’s a link to the manual so you can judge for yourself.

3. New meta-analysis suite

Stata is known for its community-contributed meta-analysis. Now there is an official StataCorp suite as well. It’s complete and easy to use. And yes, it has funnel plots and forest plots, and bubble plots and L’Abbé plots.

4. Revamped and expanded choice modeling (margins works everywhere)

Choice modeling is jargon for conditional logit, mixed logit, multinomial probit, and other procedures that model the probability of individuals making a particular choice from the alternatives available to each of them.

We added a new command to fit mixed logit models, and we rewrote all the rest. The new commands are easier to use and have new features. Old commands continue to work under version control.

**margins** can now be used after fitting any choice model. **margins** answers questions about counterfactuals and can even answer them for any one of the alternatives. You can finally obtain answers to questions like, “How would a $10,000 increase in income affect the probability people take public transportation to work?”

The new commands are easier to use because you must first **cmset** your data. That may not sound like a simplification, but it simplifies the syntax of the remaining commands because it gets details out of the way. And it has another advantage. It tells Stata what your data should look like so Stata can run consistency checks and flag potential problems.

Finally, we created a new *[CM] Choice Modeling Manual*. Everything you need to know about choice modeling can now be found in one place.

5. Integration of Python with Stata

If you don’t know what Python is, put down your quill pen, dig out your acoustic modem and plug it in, push your telephone handset firmly into the coupler, and visit Wikipedia. Python has become an exceedingly popular programming language with extensive libraries for writing numerical, machine learning, and web scraping routines.

Stata’s new relationship with Python is the same as its relationship with Mata. You can use it interactively from the Stata prompt, in do-files, and in ado-files. You can even put Python subroutines at the bottom of ado-files, just as you do Mata subroutines. Or put both. Stata’s flexible.

Python can access Stata results and post results back to Stata using the Stata Function Interface (sfi), the Python module that we provide.

6. Bayesian predictions, multiple chains and more

We have lots of new Bayesian features.

We now have multiple chains. Has the MCMC converged? Estimate your model using multiple chains, and Stata reports the maximum Gelman–Rubin convergence diagnostic. If it has not yet converged, run more simulations. Still hasn’t converged? Now you can obtain the Gelman–Rubin convergence diagnostic for each parameter. If the same parameter turns up again and again as the culprit, you know where the problem lies.
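For the curious, the Gelman–Rubin diagnostic compares within-chain and between-chain variability; values near 1 indicate the chains agree. Here is a minimal Python/numpy sketch of the classic (non-split) statistic for a single parameter; this is an illustration, not Stata's code.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat) for one parameter.
    chains: 2-D array, one row per chain, one column per draw."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    w = chain_vars.mean()                  # within-chain variance
    b = n * chain_means.var(ddof=1)        # between-chain variance
    var_hat = (n - 1) / n * w + b / n      # pooled estimate of posterior variance
    return np.sqrt(var_hat / w)
```

A common rule of thumb treats values much above 1 (say, beyond 1.01–1.1, depending on how strict you are) as evidence that the chains have not yet converged.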

We now provide Bayesian predictions for outcomes and functions of them. Bayesian predictions are calculated from the simulations that were run to fit your model, so there are a lot of them. The predictions will be saved in a separate dataset. Once you have the predictions, we provide commands so that you can graph summaries of them and perform hypothesis testing. And you can use them to obtain posterior predictive *p*-values to check the fit of your model.

There’s more. Click the title.

7. Extended regression models (ERMs) for panel data

ERMs fit models with problems. These problems can be any combination of (1) endogenous and exogenous sample selection, (2) endogenous covariates, also known as unobserved confounders, and (3) nonrandom treatment assignment.

What’s new is that ERMs can now be used to fit models with panel (2-level) data. Random effects are added to each equation. Correlations between the random effects are reported. You can test them, jointly or singly. And you can suppress them, jointly or singly.

Ermistatas got a fourth antenna.

8. Importing of SAS and SPSS datasets

New command **import sas** imports *.sas7bdat* data files and *.sas7bcat* value-label files.

New command **import spss** imports IBM SPSS version 16 or higher *.sav* and *.zsav* files.

I recommend using them from their dialog boxes. You can preview the data and select the variables and observations you want to import.

9. Flexible nonparametric series regression

New command **npregress series** fits models like

*y* = g(*x*_{1}, *x*_{2}, *x*_{3}) + *ε*

No functional-form restrictions are placed on g(), but you can impose separability restrictions. The new command can fit

*y* = g_{1}(*x*_{1}) + g_{2}(*x*_{2}, *x*_{3}) + *ε*

*y* = g_{1}(*x*_{1}, *x*_{2}) + g_{2}(*x*_{3}) + *ε*

*y* = g_{1}(*x*_{1}, *x*_{3}) + g_{2}(*x*_{2}) + *ε*

and even fit

*y* = *b*_{1}*x*_{1} + g_{2}(*x*_{2}, *x*_{3}) + *ε*

*y* = *b*_{1}*x*_{1} + *b*_{2}*x*_{2} + g_{3}(*x*_{3}) + *ε*

I mentioned that lasso can perform inference in models like

. dsregress y x1, controls(x2-x999)

If you know that variables **x12**, **x19**, and **x122** appear in the model, but do not know the functional form, you could use **npregress series** to obtain inference. The command

. npregress series y x12 x19 x122, asis(x1)

fits

*y* = *b*_{1}*x*_{1} + g_{2}(*x*_{12}, *x*_{19}, *x*_{122}) + *ε*

and, among other statistics, reports the coefficient and standard error of *b*_{1}.
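The "series" in **npregress series** means that g() is approximated by a linear combination of basis functions of the covariates. Here is a stripped-down Python/numpy sketch of that idea using a simple polynomial basis; npregress series offers richer bases and chooses the number of terms by cross-validation, so treat this only as an illustration.

```python
import numpy as np

def series_fit(x, y, degree=3):
    """Approximate y = g(x) + e by regressing y on the basis 1, x, x^2, ..., x^degree."""
    B = np.vander(x, degree + 1, increasing=True)   # n-by-(degree+1) basis matrix
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)    # ordinary least squares
    return coef

def series_predict(coef, x):
    """Evaluate the fitted series approximation at points x."""
    return np.vander(x, len(coef), increasing=True) @ coef
```

With enough terms, a smooth g() is recovered to high accuracy without ever writing down its functional form.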

10. Multiple datasets in memory, meaning frames

I’m a sucker for data management commands. Even so, I do not think I’m exaggerating when I say that frames will change the way you work. If you are not interested, bear with me. I think I can change your mind.

You can have multiple datasets in memory. Each is stored in a named frame. At any instant, one of the frames is the current frame. Most Stata commands operate on the data in the current frame. It’s the commands that work across frames that will change the way you work, but before you can use them, you have to learn how to use frames. So here’s a bit of me using frames:

. use persons
. frame create counties
. frame counties: use counties
. tabulate cntyid
. frame counties: tabulate cntyid

Well, I’m thinking at this point, it appears I could merge persons.dta with counties.dta, except I’m not thinking about merging them. I’m thinking about linking them.

. frlink m:1 cntyid, frame(counties)

Linking is frames’ equivalent of **merge**. It does not change either dataset except to add one variable to the data in the current frame. New variable **counties** is created in this case. If I were to drop the variable, I would eliminate the link, but I’m not going to do that. I’m curious whether the counties in which people reside in persons.dta were all found in counties.dta. I can find out by typing

. count if counties==.

If 1,000 were reported, I would now **drop counties**, and it would be as if I had never linked the two frames.

Let’s assume **count** reported 0. Or 4, which is a small enough number that I don’t care for this demonstration. Now watch this:

. generate relinc = income / frval(counties, medinc)

I just calculated each person’s income relative to the median income in the county in which he or she resides, and median income was in the counties dataset, not the persons dataset!

Next, I will copy to the current frame all the variables in **counties** that start with **pop**. The command that does this, **frget**, will use the link and copy the appropriate observations.

. frget pop*, from(counties)
. describe pop*
. generate ln_pop18plus = ln(pop18plus)
. generate ln_income = ln(income)
. correlate ln_income ln_pop18plus

I hope I have convinced you that frames are of interest. If not, this is only one of the five ways frames will change how you work with Stata. Maybe one of the other four ways will convince you. Visit the overview of frames page at stata.com.
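If the linking mechanics still seem abstract, the semantics are easy to mimic in any language. Here is a tiny Python sketch of what an m:1 link plus a linked-value fetch conceptually does; the data are made up, and Stata's implementation stores the link as a variable rather than copying anything this way.

```python
# Two "frames": persons (the current frame) and counties, linked m:1 on cntyid
persons = [
    {"cntyid": 1, "income": 40_000},
    {"cntyid": 2, "income": 60_000},
    {"cntyid": 1, "income": 55_000},
]
counties = {1: {"medinc": 50_000}, 2: {"medinc": 48_000}}

for p in persons:
    link = counties.get(p["cntyid"])   # the lookup the link establishes once
    # fetch a linked value from the counties frame, as the relinc
    # calculation above does in Stata; None stands in for a missing link
    p["relinc"] = p["income"] / link["medinc"] if link else None
```

The point of the link is exactly this: each observation in the current frame knows which observation in the other frame it corresponds to, so values can be fetched without merging.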

11. Sample-size analysis for confidence intervals

The goal is to optimally allocate study resources when CIs will be used for inference or, said differently, to estimate the sample size required to achieve the desired precision of a CI in a planned study. It works for one mean, two independent means, or two paired means. Or one variance.

12. Nonlinear DSGE models

DSGE stands for Dynamic Stochastic General Equilibrium. Stata previously fit linear DSGEs. Now it can fit nonlinear ones too.

I know this either interests you or does not, and if it does not, there will be no changing your mind. It interests me, and what makes the new feature spectacular is how easy models are to specify and how readable the code is afterwards. You could almost teach from it. If this interests you, click through.

13. Multiple-group IRT

IRT (Item Response Theory) is about the relationship between latent traits and the instruments designed to measure them. An IRT analysis might be about scholastic ability (the latent trait) and a college admission test (the instrument).

Stata 16’s new IRT features produce results for data containing different groups of people. Do instruments measure latent traits in the same way for different populations?

Here is an example. Do students in urban and rural schools perform differently on a test intended to measure mathematical ability? Using Stata 16, you can fit a 2-parameter logistic model comparing the groups by typing

. irt 2pl item1-item10, group(urbanrural)

What’s new is the **group()** option.

Does an instrument measuring depression perform the same today as it did five years ago? You can fit a graded-response model that compares the groups by typing

. irt grm item1-item10, group(timecategory)

And IRT’s postestimation graphs have been updated to reveal the differences among groups when a **group()** model has been fit.

The examples I mentioned both concerned two groups, but IRT can handle any number of them.

14. Panel-data Heckman-selection models

Heckman selection adjusts for bias when some outcomes are missing not at random.

The classic example is economists’ modeling of wages. Wages are observed only for those who work, and whether you work is unlikely to be random. Think about it. Should I work or go to school? Should I work or live off my meager savings? Should I work or retire? Few people would be willing to make those decisions by flipping a coin.

If you worry about such problems and are using panel data, the new **xtheckman** command is the solution.

15-21. Seven more new features

I will summarize the last seven features briefly. My brevity makes them no less important, especially if they interest you.

**15. NLMEs with lags: multiple-dose pharmacokinetic models and more** can now be fit by **menl**, Stata’s command for nonlinear mixed-effects regression. This includes multiple-dose models.

**16. Heteroskedastic ordered probit** joins the ordered probit models that Stata already could fit.

**17. Graph sizes in inches, centimeters, and printer points** can now be specified. Specify **1in**, **1.4cm**, or **12pt**.

**18.** Programmers: **Mata’s new Quadrature class** numerically integrates *y* = f(*x*) over the interval *a* to *b*, where *a* may be -∞ or finite and *b* may be finite or +∞.
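I won't guess at the internals of Mata's Quadrature class, but the flavor of numerical integration over a finite interval is easy to sketch. Here is a Python/numpy illustration using Gauss-Legendre quadrature; handling an infinite endpoint takes an extra change of variable, omitted here.

```python
import numpy as np

def quad_gl(f, a, b, n=64):
    """Approximate the integral of f over the finite interval [a, b]
    with n-point Gauss-Legendre quadrature (nodes rescaled from [-1, 1])."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))
```

For smooth integrands, a modest number of nodes already gives near machine-precision results.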

**19.** Programmers: **Mata’s new Linear programming class** solves linear programs using an interior-point method. It minimizes or maximizes a linear objective function subject to linear constraints (equality and inequality) and boundary conditions.

**20. Do-file Editor: Autocompletion and more**. The editor now provides syntax highlighting for Python and Markdown. And it autocompletes Stata commands, quotes, parentheses, braces, and brackets. Last but not least, spaces as well as tabs can be used for indentation.

**21. Stata for Mac: Dark Mode and tabbed windows.** Dark mode is a color scheme that darkens background windows and controls so that they do not cause eye strain or distract from what you are working on. Stata now supports it. Meanwhile, tabbed windows conserve screen real estate. Stata has lots of windows. With the exception of the Results window, they come and go as they are needed. Now you can combine all or some into one. Click the tab, change the window.

**That’s it**

The highlights are 58% of what’s new in Stata 16, measured by the number of text lines required to describe them. Here is a sampling of what else is new.

- **ranksum** has new option **exact** to specify that exact *p*-values be computed for the Wilcoxon rank-sum test.
- New setting **set iterlog** controls whether estimation commands display iteration logs.
- **menl** has new option **lrtest** that reports a likelihood-ratio test comparing the nonlinear mixed-effects model with the model fit by ordinary nonlinear regression.
- The **bayes:** prefix command now supports the new **hetoprobit** command so that you can fit Bayesian heteroskedastic ordered probits.
- The **svy:** prefix works with more estimation commands, namely, existing command **hetoprobit** and new commands **cmmixlogit** and **cmxtmixlogit**.
- New command **export sasxport8** exports datasets to SAS XPORT Version 8 Transport format.
- New command **splitsample** splits data into random samples. It can create simple random samples, clustered samples, and balanced random samples. Balanced splitting can be used for matched-treatment assignment.

I could go on. Type **help whatsnew15to16** when you get your copy of Stata 16 to find out all that’s new.

I hope you enjoy Stata 16.