
CFA Program exam, machine learning | New in 2019

Here is a summary of the new ‘Machine Learning’ section in reading number 8 of the Level II CFA Program exam.

Section No. 7

Machine Learning

In real time, huge amounts of data (commonly called Big Data) are being created by institutions, businesses, governments, financial markets, individuals, and sensors (e.g. satellite imaging). Investors generally use big data to find better investment opportunities.

Big data covers data from traditional and non-traditional sources. Analysis of big data is challenging because:

1. non-traditional data sources are often unstructured
2. traditional statistical methods do not perform well in establishing relationships among data at such a massive scale.

Machine learning (advanced computer techniques) uses computer algorithms and adaptable models to study relationships within the data. Extracting information from big data using such techniques is called data analysis (a.k.a. ‘data analytics’).

Major Focuses of Data Analytics

Six focuses of data analytics include:

Measuring correlations

Determining the synchronous relationship between variables, i.e. how variables tend to covary.

Making predictions

Identifying variables that can help predict the value of a variable of interest.

Making causal inferences

Causal inference focuses on determining whether an independent variable causes changes in the dependent variable. Causal inference implies a stronger relationship between variables than correlation or prediction.

However, in real-world situations, estimating causal effects in the presence of confounding variables (variables that influence both the dependent and independent variables) is challenging.

Classifying data

Classification focuses on sorting observations into distinct categories. Variables can be continuous variables (such as time or weight) or categorical variables (countable distinct groups). When the target variable is categorical, the econometric model is called a classifier.

Many classification models are binary classifiers (two possible values, 0 or 1); others are multi-category classifiers (ordinal or nominal). Ordinal variables follow some natural order (small, medium, large, or low-to-high ratings, etc.). Nominal variables do not follow any natural order (e.g. equity, fixed income, alternatives).

Sorting data into clusters

Clustering focuses on sorting observations into groups based on similar attributes or a set of criteria that may or may not be pre-specified.

Reducing the dimension of data

Dimension reduction is the process of reducing the number of independent variables while retaining the variation across observations.

When applied to data with a large number of attributes, dimension reduction makes it easier to visualize the data on computer screens. For out-of-sample forecasting, simpler models tend to perform better than complex ones.

In quantitative investment and risk management, dimension reduction improves performance by focusing on the major factors that drive asset price movements.

All these problems (prediction, clustering, dimension reduction, classification etc.) are often solved by machine learning methods.

What is Machine Learning?

Machine learning (ML) is a subset of artificial intelligence (AI).

Machine Learning (ML) uses statistical techniques that give computer systems the ability to act by learning from data without being explicitly programmed.

The ML program uses inputs from a historical database, along with trends and relationships, to discover hidden insights and patterns in the data.

Types of Machine Learning

Two broad categories of ML techniques are:

1. Supervised learning

Supervised learning uses labeled training data (a set of inputs supplied to the program) and processes that information to find the output. Supervised learning follows the logic of ‘X leads to Y’.

For example, consider a ML program that predicts whether credit card transactions are fraudulent or not.

This is a binary classifier where the transaction is either fraudulent (value = 1) or non-fraudulent (value = 0).

The ML program collects input from the growing database of credit card transactions labeled ‘fraudulent’ or ‘non-fraudulent’ and learns the relationship from experience.

The performance is measured by the percentage of transactions accurately predicted.

2. Unsupervised learning

Unsupervised learning does not make use of labeled training data and does not follow the logic of ‘X leads to Y’. There are no outcomes to match to; instead, the input data is analyzed, and the program discovers structure within the data itself.

One application of unsupervised learning is ‘clustering’, where the program identifies similarities among data points and automatically splits the data into groups based on shared attributes.

Note:
Some additional ML categories are ‘deep learning’ (ML program using neural network with many hidden layers) and ‘reinforcement learning’ (ML program that learns from interacting with itself).

Machine Learning Vocabulary

General ML terminology differs from the terms used in statistical modeling.

For example,
the Y variable (the dependent variable in regression analysis) is called the target variable (or tag variable) in ML.

The X variables (independent variables in regression analysis) are known as features in ML.

In ML terminology, organizing features for ML processing is called feature engineering.

Machine Learning Algorithms

The following sections provide description of some important models and procedures categorized under supervised and unsupervised learning.

Models and Procedures

Supervised learning:
  • Penalized Regression
  • CART
  • Random Forests
  • Neural Networks

Unsupervised learning:
  • Clustering Algorithms
  • Dimension Reduction

Neural networks are commonly included under supervised learning but are also important in reinforcement learning, which is considered part of unsupervised learning.

Supervised Learning

Supervised learning is divided into two classes based on the nature of the Y variable: regression and classification. The two classes use different ML techniques.

Regression

When the Y variable is continuous, supervised ML techniques include linear and non-linear models, often used for prediction problems.

Classification

When the Y variable is categorical or ordinal, classification techniques include CART (classification and regression trees), random forests, and neural networks.

Penalized Regression

Penalized regression is a computationally efficient method used to solve prediction problems. Penalized regression (imposing a penalty on the size of regression coefficients) improves prediction in large datasets by shrinking the number of independent variables and handling model complexity.

Penalized regression and other forms of linear regression, like multiple regression, are classified as special cases of the generalized linear model (GLM).

GLM is a linear regression in which the specification can be changed based on two choices:
1. the maximum number of independent variables the researcher wants to use;
2. how good a model fit is required.

In large datasets, an algorithm starts modelling unnecessarily complex relationships among the many variables and produces an output that does not perform well on new data.

This problem is called overfitting. Penalized regression addresses overfitting through regularization (penalizing the statistical variability and magnitude of high-dimensional data features). In prediction, parsimonious models (having fewer parameters) are less subject to overfitting.

Penalized regression is similar to ordinary linear regression, with an added penalty that increases as the number of variables increases.

The purpose is to regularize the model so that only variables that truly explain Y remain in it. Penalized regressions therefore involve a trade-off between contribution to model fit and the penalty.
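To make the penalty idea concrete, here is a minimal sketch of a penalized (LASSO) regression using scikit-learn. The library choice, the simulated dataset, and the alpha value are illustrative assumptions, not part of the curriculum.

```python
# Minimal LASSO sketch: the L1 penalty shrinks irrelevant coefficients
# to exactly zero, leaving a parsimonious model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                        # 20 candidate features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=500)  # only 2 truly matter

model = Lasso(alpha=0.1)   # alpha sets the size of the penalty
model.fit(X, y)

print(sum(c != 0 for c in model.coef_), "features retained")
```

Raising alpha strengthens the penalty and drops more variables; this is exactly the trade-off between model fit and penalty described above.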

Classification and Regression Trees

CART is a common supervised ML method that can be applied to both classification and regression problems.

CART model is:

  • computationally efficient
  • adaptable to complex datasets
  • usually applied where the target is binary
  • useful for interpreting how observations are classified

The CART model is represented by a binary tree (two-way branches). CART works on pre-classified training data. Each node represents a single input variable (X).

A classification tree is formed by splitting each node into two distinct subsets, and the process of splitting the derived subsets is repeated in a recursive manner.

The process ends when further splitting is not possible (observations cannot be divided into two distinct groups). The last node, called a terminal node, holds a category based on the attributes shared by the observations at that node.

The chosen cut-off value (the value that splits observations into two groups) is the one that most decreases classification error; therefore, observations in each subsequent division have lower error within their group.

Some parts of the tree may turn out to be denser (a greater number of splits) while others remain simpler (a smaller number of splits).
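As a hedged illustration of the recursive splitting just described, here is a minimal classification-tree sketch in scikit-learn; the simulated data and the depth cap are assumptions for demonstration only.

```python
# Minimal CART sketch: each internal node picks the feature/cut-off
# that most reduces classification error (Gini impurity by default);
# max_depth caps the number of recursive splits.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=42)

tree = DecisionTreeClassifier(max_depth=3, criterion="gini")
tree.fit(X, y)

print(tree.predict(X[:5]))  # categories held at the terminal nodes
```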

Classification tree vs Regression Tree

A classification tree is used when the value of the target variable is categorical.
A regression tree is used when the value of the target variable is continuous or numeric.

Classification Tree Example

[Figure: sample classification tree]

Random Forests

Random forest is a ML technique that combines (ensembles) multiple decision trees, each built from a random selection of features, to produce accurate and stable predictions.

Splitting a node in a random forest is based on the best feature from a random subset of n features. Therefore, each tree varies marginally from the other trees.

The power of this model is based on the idea of ‘wisdom of crowd’ and ensemble learning (using numerous algorithms to improve prediction).

For any new observation, the classifier trees vote and the classification is decided by majority. Using random subsets of features across the pool of classification trees prevents overfitting and also reduces the noise-to-signal ratio.

CART and random forest techniques are useful for classification problems in investment and risk management (such as predicting IPO performance or classifying information carrying positive or negative sentiment).
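An illustrative random forest sketch in scikit-learn follows; the dataset and parameter values are assumptions, not curriculum material.

```python
# Random forest sketch: an ensemble of trees, each considering a random
# subset of features at every split, classifying by majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,     # number of trees in the ensemble
    max_features="sqrt",  # random feature subset tried at each split
    random_state=0,
)
forest.fit(X, y)

print(forest.predict(X[:3]))  # majority vote across the 100 trees
```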

Neural Networks

Neural networks are also known as artificial neural networks, or ANNs. They are appropriate for nonlinear statistical data and for data with complex connections among variables. Neural networks contain nodes that are connected by links (arrows).

ANNs have three types of interconnected layers:

  • an input layer
  • hidden layers
  • an output layer

Input layer consists of nodes, and the number of nodes in the input layer represents the number of features used for prediction.

For example, the neural network shown below has an input layer with three nodes representing three features used for prediction, two hidden layers with four and three hidden nodes respectively, and an output layer.

For the sample network given below, the four numbers – 3, 4, 3, and 1 – are hyperparameters (variables set by humans that determine the network’s structure).

Sample: A Neural Network with Two Hidden Layers


Links (arrows) are used to transmit values from one node to the other. Nodes of the hidden layer(s) are called neurons because they process information.

Nodes assign weights to each connection depending on the strength and value of the information received, and the weights typically vary as the process advances.

A formula (the activation function), which is generally nonlinear, is applied to the inputs. This allows the modeling of complex non-linear functions. Learning (improvement) happens through better weights being applied by the neurons.

Better weights are identified by improvement in some performance measure (e.g. lower error). The hidden layers feed the output layer.

Deep learning nets (DLNs) are neural networks with many hidden layers (often more than 20). Advanced DLNs are used for speech recognition and image or pattern detection.
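Purely as an illustrative sketch, the 3-4-3-1 network described above could be approximated with scikit-learn’s MLPClassifier; the simulated data and training settings are assumptions.

```python
# The 3 input nodes and 1 (binary) output node are implied by the data
# and task; hidden_layer_sizes=(4, 3) gives the two hidden layers.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)

net = MLPClassifier(hidden_layer_sizes=(4, 3), activation="relu",
                    max_iter=2000, random_state=1)
net.fit(X, y)   # weights are adjusted iteratively to lower the error

print(net.predict(X[:5]))
```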

Unsupervised Learning

In unsupervised ML, we only have input variables and there is no target (corresponding output variables) to which we match the feature set. Unsupervised ML algorithms are typically used for dimension reduction and data clustering.

Clustering Algorithms

Clustering algorithms discover the inherent groupings in the data without any predefined class labels. Clustering is different from classification. Classification uses predefined class labels assigned by the researcher.

Two common clustering approaches are:

i) Bottom-up clustering:

Each observation starts in its own cluster; clusters are then progressively combined with other clusters based on some criteria, in a non-overlapping manner.

ii) Top-down clustering:

All observations begin as one cluster, and then split into smaller and smaller clusters gradually.

The selection of clustering approach depends on the nature of the data or the purpose of the analysis. These approaches are evaluated by various metrics.

K-means Algorithm: An example of a Clustering Algorithm

K-means is a type of bottom-up clustering algorithm in which data is partitioned into k clusters using two geometric ideas: the ‘centroid’ (the average position of the points in a cluster) and ‘Euclidean distance’ (the straight-line distance between two points). The required number of clusters, k, must be specified beforehand.

Suppose an analyst wants to divide a group of 100 firms into 5 clusters based on two numerical metrics of corporate governance quality.

The algorithm works iteratively to assign each data point to a suitable group (centroid) based on the similarity of the provided features (in this case, the two corporate governance metrics). There will be five centroid positions, initially located at random.

Step 1. Assign each data point to its nearest centroid based on the squared Euclidean distance.

Step 2. The centroids are then recomputed based on the mean location of all assigned data points in each cluster.

The algorithm repeats steps 1 and 2 until the centroids no longer move and the sum of squared distances between points and their centroids is minimized. The five clusters for the 100 firms are considered optimal when the average squared straight-line distance of the data points from their centroids is at a minimum.

However, the final result may depend on the initial positions chosen for the centroids. This problem can be addressed by running the algorithm many times with different initial centroid positions, and then selecting the best-fitting clustering.
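A minimal scikit-learn sketch of the 100-firm example follows; the governance scores are randomly generated for illustration, and n_init handles the multiple random starting positions just mentioned.

```python
# K-means sketch: 100 firms, 2 governance-quality metrics, k = 5.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2))   # 100 firms, 2 governance metrics

# n_init=10 reruns the algorithm from 10 random centroid positions
# and keeps the clustering with the lowest sum of squared distances.
km = KMeans(n_clusters=5, n_init=10, random_state=7)
labels = km.fit_predict(X)      # cluster assignment for each firm

print(km.cluster_centers_)      # final centroid positions
```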

Clustering is a valuable ML technique used for many portfolio management and diversification functions.

Dimension Reduction

Dimension reduction is another unsupervised ML technique that reduces the number of random variables for complex datasets while keeping as much of the variation in the dataset as possible.

Principal component analysis (PCA) is an established method for dimension reduction. PCA reduces highly correlated data variables into fewer, uncorrelated composite variables. A composite variable combines two or more of the highly correlated variables.

The first principal component accounts for the largest share of variation in the data; each succeeding principal component captures the largest share of the remaining variation, subject to the constraint that it is uncorrelated with the preceding components.

Each subsequent component has a lower information-to-noise ratio. The PCA technique has been applied to stock market returns and yield curve dynamics.
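A short illustrative PCA sketch in scikit-learn, using simulated correlated series (the data and the component count are assumptions):

```python
# PCA sketch: reduce 10 correlated series to 3 uncorrelated
# composite variables (principal components).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
common = rng.normal(size=(250, 1))             # shared driver
X = common + 0.3 * rng.normal(size=(250, 10))  # 10 correlated series

pca = PCA(n_components=3)
Z = pca.fit_transform(X)                       # the composite variables

# The first ratio is largest; each later component captures what
# remains, uncorrelated with those before it.
print(pca.explained_variance_ratio_)
```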

Dimension reduction techniques are applicable to numerical, textual or visual data.

Supervised Machine Learning: Training

The process to train ML models includes the following simple steps.

  1. Define the ML algorithm.
  2. Specify the hyperparameters used in the ML technique. This may involve several training cycles.
  3. Divide datasets in two major groups:
    Training sample (the actual dataset used to train the model. Model actually learns from this dataset).
    Validation sample (validates the performance of model and evaluates the model fit for out-of-sample data.)
  4. Evaluate model-fit through validation sample and tune the model’s hyperparameters.
  5. Repeat the training cycles for some given number of times or until the required performance level is achieved.

The output of the training process is the ‘ML model’.

The model may overfit or underfit depending on the number of training cycles; e.g. overfitting (from excessive training cycles) results in poor out-of-sample predictive performance.

In step 3, the process randomly and repeatedly partitions the data into training and validation samples.

As a result, a data point may be labeled as a training sample in one split and a validation sample in another. This process, called ‘cross-validation’, controls biases in the training data and improves the model’s predictions.

Note: Smaller datasets require more cross-validation, whereas bigger datasets require less.
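A minimal sketch of steps 3-5 plus cross-validation, using scikit-learn; the model choice and the candidate hyperparameter values are illustrative assumptions.

```python
# Split the data, tune a hyperparameter on the validation sample,
# then use k-fold cross-validation to reduce dependence on one split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

for depth in (2, 4, 8):   # candidate hyperparameter values
    model = DecisionTreeClassifier(max_depth=depth).fit(X_train, y_train)
    print(depth, model.score(X_val, y_val))   # validation accuracy

# Cross-validation: repeated random partitions into train/validation
scores = cross_val_score(DecisionTreeClassifier(max_depth=4), X, y, cv=5)
print(scores.mean())
```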

Now you should practice the End of Chapter (EOC) questions.

Level II Examination Tips

The Level II CFA Program exam is infamous for being the most difficult of all three CFA exams. Nonetheless, a well-thought-out study schedule, proper understanding of concepts, rigorous practice with sample questions, and dedication and commitment can greatly increase the probability of passing the exam. Here are a few tips to follow before and during the exam:


Preparation


  • Start early, at least six months before the exam. Broadly review each study session to determine your familiarity with the topics. This will also help you in assessing the workload and formulating a convenient study schedule.
  • Dedicate at least 2-3 hours a day to studying in an undisturbed, study-conducive environment.
  • Always remember to practice questions after completing an LOS.
  • Remember to review the material as you progress. Test yourself with a comprehensive practice exam at least once a month.
  • Try to finish covering all the material at least 1-1.5 months before the exam. Use this last month or so to revise the entire course at least two to three times. The key is to study effectively, manage your time, and retain the material you have learned over the past six months.
  • Attempt at least two full length mock exams 2-3 weeks before the exam. This will not only allow you to become used to writing two three-hour exams in one day, but will also assist you in managing time during the actual exam.


The Exam Day


  • Ensure that you understand the examination rules. Pack all items needed a few days before the exam. These would include sharpened pencils, a calculator, ID card/passport etc. Do not take any unnecessary items with you in the examination room as they will be a cause of inconvenience. It would be a good idea to check out the site where you will be taking the exam well in advance.
  • Make sure you have slept properly and are well rested the night before the exam. Revise all important formulas and ‘relationships’ (e.g. the relationship between callable bonds and interest rates) a day before the exam.
  • Have something light and energizing for breakfast. Avoid drinking too many liquids since you will not have time for breaks during the exam.
  • During the mid-break, have a light snack. Try to relax but remember to remain focused. Avoid lengthy and dubious discussions with fellow candidates about the questions you attempted; it is a futile exercise and may result in stress and panic. Just remember, you are not yet done with the exam.


Attempting Questions


  • Manage your time. You will be required to attempt ten item set questions during each session (morning and afternoon).
  • Attempt the exam with utmost concentration. Invest in a pair of earplugs if you are easily distracted.
  • Start the exam with areas that you are proficient in. Always complete an area before moving on to the next. Do not push back a question as you will not have enough time to revisit it later. Do not recheck your answers even if you have the time.
  • When reading an item set, it is sometimes easier to solve each sub-part separately. If you have practiced well, you would know when one set of information has ended and the other has begun. Box all important information given in the vignette:
    • Look for key descriptive words, for instance, ‘leverage’, ‘guaranteed’, ‘riskless’, ‘open-ended, closed-ended’, ‘callable, putable’, and words describing a time period like ‘short-term, intermediate-term or long-term’, etc. Reading the vignette carefully will ensure that you do not overlook such details.
    • Sometimes an important concept/theory would be described in words; you may want to box and write its name next to it. If you have prepared well, your knowledge of the concept/theory will automatically flow in next.
    • Circle all numeric values and write a superscript stating what the values indicate. This will make it easier for you to form an equation and understand the context of the question. It will also help in identifying the ‘unknown’ values, and how they might be derived.
  • Once you have used a collection of information to answer a sub-part, it is highly unlikely that the same information would be used for another sub-part. This means that you may cast aside the information already utilized.
  • Sometimes the data might not perfectly fit a formula you have learned. In such rare situations, improvise, e.g. use D0 instead of D1 if no growth rate is mentioned or use the T-bill rate if no T-note rate is mentioned.
  • If you are unable to comprehend a question, try to make an intelligent guess as there is no penalty for guessing. It is unlikely that you will be completely clueless. The answer options can give you a hint as to whether you are thinking on the right track. This goes for both numerical as well as conceptual questions.
  • For statement questions, read the question regarding the statement before reading the statement itself, so that you know exactly what to look for. Generally you will have to determine:
    • The accuracy of the statement.
    • The reason for the inaccuracy of a statement.
    • A concept described in the statement (the question could directly address the concept or indirectly address it by inquiring about related concepts)
    • An unknown variable that could be calculated using the information in the statement.
    • Statement questions can be in a conversational format. While attempting such questions, remember to accurately identify the information that is being presented and address it accordingly.


Practice


  • The key to knowing how to attempt the Level II CFA Program exam is to practice exam-like questions extensively. This will not only serve as a guide to attempting the examination but will also help in channeling your approach to studying the curriculum. The questions will tell you ‘how to study’.
  • Although practicing questions can help, remember not to overdo it. Solving questions without proper preparation will do no good. It is imperative that you understand an LOS completely and thoroughly before attempting the related practice questions.  By doing so, you will be better equipped to analyze your performance and know your strengths and weaknesses. As you practice further, you will start learning the different ways in which a specific concept can be tested.


Time Management


  • Time management is critical to your success. It is important to manage your time during exam preparation as well as while writing the actual exam.
  • While preparing, spend more time on the areas that carry more weight, or those that require more practice.



CFA Level II Market-Based Valuation

The material here is fairly straightforward and offers easy points if you focus on the concepts and a few key formulas. The section is large enough, and within an important topic area (Equity), that you can count on getting a few questions on the exam.


Be sure you know the advantages and disadvantages of each market multiple method as well as the general concepts behind the measures. We’ll cover these in today’s post. You must understand the Gordon Growth Model and be able to move things around to find different variables. Finally, free cash flow to equity (FCFE) and enterprise value are fairly important formulas, which we’ll cover on Friday.


Advantages and Disadvantages of Market Multiples


  • Relative value is relevant when picking stocks regardless of general market mood – whatever the general trend in the market, there will still be over- and under-valued assets which can be found using relative valuation measures.
  • Easy to compute and understand – These measures (P/E, P/B, E/P) are some of the most widely used metrics. The math is easy to compute and the concept is intuitive. There is a risk that these measures are overly simplistic though.
  • May be difficult to compare companies across multiples without significant adjustments – Companies in different sectors and industries may vary greatly in their fundamentals, i.e. debt burden, payout ratio, growth and margin characteristics. This makes it inappropriate to compare many companies directly without some kind of adjustment.
  • Biggest disadvantage is that multiples build in systematic errors: even if one stock is extremely under-valued compared to another, if the general market or the sector is in an over-valued state then the asset may not truly be a good investment.


Comparables versus Forecasted Fundamentals


The material covers two forms of market-based valuation, comparables and forecasted fundamentals.


  • Comparables are relative value measures where a benchmark value is created through analysis or averaging, usually the sector or industry average. This is then used to compare equities within that sector to determine relative over- or under-valuation.
  • Comparables provide an objective guide for valuation but provide no information on the ‘why’ behind an asset’s value.
  • Forecasted fundamentals use financial statement data (payout ratio, ROE, earnings, etc.) to find either a present value or a future forecast of the asset price; the approach most often cited in the curriculum is discounted cash flow (DCF). This tries to explain the ‘why’ in valuation.


Price to Earnings (P/E)


Price-earnings is most often quoted on ‘trailing’ earnings (those over the last 12 months) but can also be quoted on ‘leading’ earnings (those expected over the next 12 months), or a combination. Trailing earnings are more widely used and are not subject to forecasting errors, but leading earnings should be used if the analyst expects a regime change in earnings.
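In standard notation, with b the retention rate (so 1 − b is the payout ratio), r the required return, and g the growth rate, the Gordon growth model gives:

$$\frac{P_0}{E_0} = \frac{(1-b)(1+g)}{r-g} \ \ \text{(trailing)}, \qquad \frac{P_0}{E_1} = \frac{1-b}{r-g} \ \ \text{(leading)}$$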



With the P/E ratio, we can see the relationship between required return, growth, and the retention rate.  You need to be able to understand the relationships in this formula and be able to change things around to solve for different variables.


Advantages/Disadvantages of P/E


  • Simple and widely understood
  • Intuitive since investment value is derived from corporate earnings
  • Negative earnings make the ratio meaningless
  • Earnings can be volatile or transitory, making the measurement inconsistent
  • The biggest disadvantage is management’s incentive to manipulate earnings


Analysts may want to ‘normalize’ earnings by taking the average over an entire business cycle. This helps reduce the short-term effects of business-cycle changes on different industries. The curriculum discusses two ways to do this: the historical method and the ROE method. The historical method is just the simple average of earnings over the cycle. The ROE method averages the firm’s ROE over the cycle and multiplies it by the current book value per share.
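A quick worked example with hypothetical numbers: if the firm’s ROE averaged 12% over the cycle and current book value per share is $20, normalized EPS = 0.12 × $20 = $2.40. At a market price of $36, the normalized P/E would be $36 / $2.40 = 15.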


One important piece of vocabulary that often confuses candidates is the ‘justified’ P/E. This is simply the P/E based on forecasted fundamentals, as follows:
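In standard notation, the justified leading P/E is the payout ratio divided by the spread between the required return and growth:

$$\frac{P_0}{E_1} = \frac{1-b}{r-g}$$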



Notice, it is the P/E that should result given the forecasted earnings and the company’s payout ratio.


Earnings Yield


The earnings yield is just the inverse of the P/E (earnings divided by price, E/P). It carries the same meaning as the P/E but is useful when earnings are negative.


Price to Book Value (P/B)


Analysts are generally skeptical of income statement metrics because of the ease of, and incentive for, manipulating earnings. Price to book uses the Gordon Growth Model and incorporates book value in the following formula:
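In standard notation, the justified P/B implied by the Gordon growth model is:

$$\frac{P_0}{B_0} = \frac{ROE - g}{r - g}$$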



Remember, book value is total assets minus total liabilities and preferred equity, divided by common shares outstanding to get book value per share. You may need to adjust some balance sheet accounts because of differing practices across firms:


  • intangible assets: patents should be included but goodwill should not
  • assets carried at historical cost should be marked to fair value
  • adjustments for off-balance-sheet items
  • LIFO vs. FIFO adjustments and depreciation


Advantages/Disadvantages


  • Book value is more stable than earnings
  • Book value is more appropriate for companies holding mostly liquid assets (insurance, banking)
  • May be inappropriate to compare book values across industries because of differences in fundamentals


Price to Sales (P/S)


The final relative price multiple is price-to-sales: price per share divided by sales per share (net sales divided by shares outstanding). The main advantage here is that sales, or revenue, is less easily manipulated than earnings and is always positive, though revenue recognition practices can still distort the outcome. The main disadvantage is that P/S is often used to justify a valuation based on expectations of future profitability even when current earnings are negative. This happened in the years leading up to the dot-com bust: analysts used P/S and inflated sales projections to value equities that would never live up to the high expectations and eventually crumbled as investors saw that the companies would never be profitable.


Remember, this is just a quick review of the core concepts and formulas for the material. You need to actively study the study guide and question-bank software to make sure you get this stuff down. Again, fairly straightforward material, but no less important because it has a good chance of showing up on the exam.


happy studyin’


Joseph Hogue, CFA

How to Use Mock and Practice Exams to Pass the CFA Exams

There may be some confusion in semantics here: I use the term ‘practice exam’ to mean a graded test over more than one topic area but not necessarily over all topics or in the six-hour exam format. ‘Mock’ exams, however, are graded tests that incorporate all topic areas at the CFA Institute weightings and within the full six-hour time window. The difference in definitions may seem minimal, but it’s important.


You should be using practice exams throughout your studying to gauge your retention of the material. Mock exams are particularly important for two reasons:


  • Getting used to sitting down for six hours and putting your brain through some tough mental gymnastics. Don’t underestimate this. Fatigue can be a big problem on the exams, and I have heard candidates say that by the last couple of hours, they were just writing in anything to get it over with.
  • Mock exams will help you follow your progress through each topic area, refining your study in each topic where you are weak. The level 1 and 2 CFA exams have clearly defined topic weights with some areas clearly the focus of the exams.


Six Times a Charm


About nine weeks out from the exam, I would clear my dance schedule for Saturdays from 9am-4pm. Each Saturday I would take my laptop to the library and complete a mock exam using a question bank. After the exam was done, the scores for overall and within each topic area went into a spreadsheet to track my progress week to week.


Going to a quiet (non-home) location and completing the mock exam is important for another reason, state-dependent learning. This is a peculiar little trick your brain plays. State-dependent learning is a concept where you are better able to recall information when placed in similar situations in which you learned the info. We won’t go into all the biology, but you should study and practice in a similar environment where you will be taking the test. This means no more curling up in bed with the curriculum while the tv hums in the background.


I usually ended up taking at least six of these mock exams. As each progressed, I could build a confidence band around the estimate of my percentage in each topic area. While you may struggle on a particular exam and see your percentage fall in a topic, your averaged scores will be more accurate and reliable (gotta love that Quant Methods section). We know that no candidate with a score of 70% has ever failed the exam, so you should be aiming for something above this. I would aim for 80%+ in the core topics for the Level 1 or 2 exams (Ethics, FRA, Equity, Fixed Income) while looking for at least 70-75% in the other topics. If my average in any topic fell below its target in any given week, I would review that topic the next week.


This is in addition to other studying I would be doing these last couple of months. Flash-cards are a great way to quickly work through the material and review. With the time left, you may not be able to work through the official curriculum but might be able to make another pass through condensed study guides or summaries.


Coming down to the wire, you really need to fine-tune your program to make sure you get max points on the exam.


‘Til next time. Happy studyin’


Joe Hogue, CFA

Peter Mackey, CFA Ex-Head, CFA Program & Examination Development, CFA Institute

Peter Mackey, CFA was head of CFA Program & Examination Development at CFA Institute for seven years from February 2011 to December 2017.


A month before leaving his job, Peter engaged with CFA candidates in an open discussion.


This discussion is the most valuable exam insight you can get from the most relevant source.


Click below for valuable discussion:


I am Peter Mackey, CFA, developer of the CFA exam. AMA (r/CFA)


18 Amazing benefits FinQuiz Question-bank offers

FinQuiz Question Bank offers thousands of questions developed by our full-time team of CFA charterholders.


Compatibility


1. FinQuiz Question Bank is compatible with, and works well on, all operating systems and devices (Android, Mac, PC, iPhone, iPad).


Creating a test:


2. Select your desired single or multiple study sessions or readings.
3. Further filter your test using the following subcategories (see screenshot below)



4. Customize the Test (screen shot)



The Test:


5. Provide feedback (get an answer from our team in no time)


6. Add your personal notes



7. Like Question


8. Zoom in (out)


9. Bookmark question


10. Review a single question’s explanation or all explanations



11. Select questions per page


12. Explanations come with references


13. Print your test



After the Test:


14. Review your selected questions and explanations


15. Search by Question ID (test summary screenshot below)



16. See how others are doing



17. Mistakes Overview (categorizes your incorrect answers)



Performance Monitor:


18. Graphical view of your performance



Visit: www.finquiz.com/blog/shop

Level II CFA Program – Read questions or vignette first?

The Level II CFA Program exam is considered by most to be the most difficult of the three exams. Whereas the first exam was largely conceptual and tested your basic understanding of a broad range of information, the Level II exam takes that same broad range but tests detailed concepts and data interpretation. On top of this, the exam is extremely formula-intensive. You will be responsible for calculating two- and three-part formulas in almost every study session.



For general inquiries, please write to us at info@finquiz.com. For pre-sales inquiries, get in touch at sales@finquiz.com.

CFA Institute does not endorse, promote or warrant the accuracy or quality of FinQuiz. CFA® and Chartered Financial Analyst® are registered trademarks owned by CFA Institute. BA II Plus is registered trademark owned by Texas Instruments.

Copyright © 2008-2018 FinQuiz:CFA Exam Prep. All rights reserved.
Terms and Conditions | Privacy Policy | info@finquiz.com
