- Portfolio backtesting here is based on the R package fPortfolio. While that package is designed for portfolio optimization, JFE adds more covariance estimators and a GMVP strategy for backtesting. JFE also offers a comprehensive computation (Backtesting All in One) covering 6 covariance estimators combined with 2 strategies; this is somewhat time-consuming, about 3 minutes for the DJ30 dataset.
- To use this function, you must have a multivariate time series dataset in R format (xts is most encouraged), saved as .RData or .rda.
- If the loaded data are prices, pull down the menu and choose Transform Price Data; otherwise, choose Load Returns Data.
- The Next-Month Advice at the bottom of the output is the asset-weights suggestion computed by backtesting for the next period after the end of the data. The rolling length is 1 month and the estimation window is 1 year; neither can be changed so far.
- Download the dataset DJ30.RData to practice; it contains the closing prices of the Dow Jones 30 component stocks.
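The data requirements above can be sketched in plain R. The column names, dates, and the use of log-returns below are illustrative assumptions; an xts object is what JFE encourages, but a dated matrix is enough to show the idea.

```r
# Build a small multivariate price series, save it as .RData, and
# transform prices to returns -- a base-R sketch of the workflow the
# bullets above describe. Using log-returns is an assumption here;
# JFE's "Transform Price Data" may use simple returns instead.
set.seed(1)
dates  <- as.character(seq(as.Date("2020-01-01"), by = "day", length.out = 6))
prices <- sapply(c("AAPL", "MSFT"),                     # illustrative tickers
                 function(s) 100 * cumprod(1 + rnorm(6, 0, 0.01)))
rownames(prices) <- dates

save(prices, file = file.path(tempdir(), "toy.RData"))  # .RData format

returns <- diff(log(prices))   # price-to-return transformation
```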

Click the **Backtesting** button to access the pane below. First, you have to "Pick 1 bench asset"; if you choose "None", JFE will compute the cross-section average as the benchmark. Secondly, you may remove asset(s) you do not need. The following 4 blocks are its specification; we offer 3 multivariate covariance estimators: the sample covariance, Ledoit-Wolf Bayesian shrinkage, and one based on the Student t distribution.
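As a rough illustration of the shrinkage estimator mentioned above (not JFE's actual implementation): Ledoit-Wolf shrinkage pulls the sample covariance toward a structured target. In this base-R sketch the intensity `delta` is hand-picked, whereas the real estimator derives the optimal intensity from the data.

```r
# Toy Ledoit-Wolf-style shrinkage: blend the sample covariance with a
# scaled-identity target. delta = 0.3 is an assumed fixed intensity.
set.seed(42)
R      <- matrix(rnorm(200), ncol = 4)     # 50 observations of 4 asset returns
S      <- cov(R)                           # sample covariance estimator
target <- mean(diag(S)) * diag(ncol(R))    # identity scaled by average variance
delta  <- 0.3
S_shrunk <- delta * target + (1 - delta) * S
```

Shrinking toward a well-conditioned target like this is what makes the estimator more stable than the raw sample covariance when the number of assets is large relative to the sample size.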

Clicking OK produces output in the R console; "Next-Month Advice" is the diversified portfolio suggested by backtesting. For our dataset, it suggests more weight on DJI (Dow Jones Index) components.
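For intuition on where such a weights suggestion comes from, here is the textbook tangency-portfolio formula on toy numbers: weights proportional to the inverse covariance times excess mean returns. This is the standard formula, not necessarily JFE's exact backtest output.

```r
# Textbook tangency weights: w proportional to Sigma^{-1} (mu - rf),
# normalized to sum to one. All numbers are simulated toy data.
set.seed(7)
R     <- matrix(rnorm(300, mean = 0.001, sd = 0.02), ncol = 3)
mu    <- colMeans(R)
Sigma <- cov(R)
rf    <- 0.0002                # assumed risk-free rate per period
raw   <- solve(Sigma, mu - rf)
w     <- raw / sum(raw)        # normalize so the weights sum to one
```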

To check whether the backtesting strategy is good enough to follow, see the graph below: the middle-right panel shows that our optimized portfolio (red line) outperforms the benchmark, for which we picked Russia (RTS), although the drawdown is not satisfactory.
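The drawdown shown in that panel follows the standard definition (this sketch is generic, not JFE-specific): the percentage decline of cumulative wealth from its running maximum.

```r
# Drawdown of a return series: cumulative wealth relative to its
# running maximum, minus one. Zero at new highs, negative otherwise.
set.seed(3)
r        <- rnorm(100, 0.001, 0.02)        # toy daily returns
wealth   <- cumprod(1 + r)
drawdown <- wealth / cummax(wealth) - 1    # <= 0 everywhere
max_dd   <- min(drawdown)                  # maximum drawdown (most negative)
```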

Clicking "Backtesting All in One" pops up the pane below. Users only have to pick the assets and the portfolio risk settings, including the risk-free rate and the smoothing lambda; it then gives a comprehensive computation covering two strategies (Tangency & GMVP) and 6 multivariate covariance estimators. If the dataset is not large, it takes roughly 3 minutes. If the dataset is somewhat large, for example, 50 years of daily data on 100 assets, it is time-consuming: in our experiment with parallel computation (clusters), it takes 10 minutes.
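The estimator-by-strategy grid can be distributed over a cluster with base R's parallel package, which is the kind of parallel computation mentioned above. The estimator and strategy names are illustrative labels, and `backtest_one()` is a hypothetical stand-in for the real per-combination backtest.

```r
# Run a 6-estimator x 2-strategy grid in parallel. backtest_one() is a
# placeholder; in practice it would run one full backtest and return
# its performance summary.
library(parallel)

estimators <- c("sample", "shrink", "studentT", "mcd", "mve", "kendall")
strategies <- c("tangency", "gmvp")
grid <- expand.grid(est = estimators, strat = strategies,
                    stringsAsFactors = FALSE)

backtest_one <- function(est, strat) paste(est, strat, sep = "/")  # stand-in

cl  <- makeCluster(2)                              # 2 worker processes
res <- clusterMap(cl, backtest_one, grid$est, grid$strat)
stopCluster(cl)
length(res)   # 12 combinations = 6 estimators x 2 strategies
```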