Intelligent Credit Scoring. Siddiqi Naeem
Product Manager(s)
The product manager is responsible for the management of the company’s product(s) from a marketing or customer retention perspective. Their main objectives are usually revenue related, and they would have:
● Subject matter expertise in the development and implementation of product-marketing strategies.
● An in-depth knowledge of the company’s typical client base and target markets, including its best/most valued customers.
● Knowledge of future product development and marketing direction.
Product managers can offer key insights into the client base and assist during segmentation selection, selection of characteristics, and gauging the impact of strategies. They also coordinate the design of new application forms where new information is to be collected. Segmentation offers the opportunity to assess risk for increasingly specific populations – the involvement of marketing in this effort can ensure that scorecard segmentation is in line with the organization’s intended target markets. This approach produces the best results for the most valued segments and harmonizes marketing and risk directions. In other words, the scorecard is segmented based on the profile that a product is designed for, or the intended target market for that product, rather than based on risk considerations alone.
We will cover gauging the impact on key customer segments after cutoff selection in later chapters. This involves, for example, measuring metrics such as expected approval rates for high-net-worth and similar premium customers. Marketing staff should be able to provide advice on these segments. Product managers, however, typically do not have a say in the selection of final models or variables.
Operational Manager(s)
The operational manager is responsible for the management of departments such as Collections, Application Processing, Adjudication (when separate from Risk Management), and Claims. Any strategy developed using scorecards, such as changes to cutoff levels, will impact these departments. Operational managers have direct contact with customers, and usually have:
● Subject matter expertise in the implementation and execution of corporate strategies and procedures.
● An in-depth knowledge of customer-related issues.
● Experience in lending money.
Operational managers can alert scorecard developers to issues such as difficulties in data collection and interpretation by frontline staff, the impact of various strategies on the portfolio, and other issues relating to the implementation of scorecards and strategies.
A highly recommended best practice is to interview operational staff before starting the modeling project. For example, if analysts are developing a mortgage application scorecard, they should talk to the adjudicators/credit analysts who approve mortgages, or other senior staff with lending experience. Similarly, talking to collections staff is useful for those developing collections models.

I normally ask them some simple questions. For application models, I typically get about 8 to 10 approved and another 8 to 10 declined applications, and ask the adjudicator to explain why each was approved or declined. I often ask collectors whether they can identify which debtors are likely to pay before talking to them, and which specific variables they use to decide. Staff from the Adjudication, Collections, and Fraud departments can offer experience-based insight into factors that are predictive of negative behavior (i.e., variables that they think are predictive), which helps greatly when selecting characteristics for analysis and constructing the “business optimal” scorecard. This is particularly useful for analysts who don’t have much banking/lending experience, and for cases where you may be developing scorecards for a new product or a new market.

In some cases, I have obtained ideas for interesting derived variables from talking to adjudicators. In one example, when dealing with “thin” files, an adjudicator used the difference in days between the date of the first trade opened and the first inquiry as a measure of creditworthiness. The idea was that a creditworthy applicant would be able to get a credit product soon after applying, while a bad credit would take a while and several inquiries before getting money. Internationally, I have found many nuances in lending, as well as uniquely local variables, from country to country simply by talking to bankers.
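The adjudicator’s derived variable above is straightforward to compute. The sketch below is illustrative only: the function name, field names, and example dates are assumptions, not from any specific bureau file layout.

```python
from datetime import date

def inquiry_to_trade_days(first_inquiry: date, first_trade: date) -> int:
    """Days between the first credit bureau inquiry and the first trade opened.

    The adjudicator's intuition: a creditworthy applicant obtains credit
    soon after applying (a small gap), while a weak applicant needs several
    inquiries over a longer period before being granted credit.
    """
    return (first_trade - first_inquiry).days

# Illustrative thin-file applicant: first inquiry Jan 10, first trade Jan 24
gap = inquiry_to_trade_days(date(2020, 1, 10), date(2020, 1, 24))
print(gap)  # 14
```

In practice such a derived variable would be binned and tested for predictive strength like any other candidate characteristic.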
In Western countries for example, the variable “Time at Address” is useful for younger people as they tend to live on their own soon after turning 18 or graduating. However, in other cultures where young people tend to live with their parents, often into middle age, a high number for that variable may not be fully indicative of financial stability for young people. Interviews with local bankers have helped me understand the data better and construct scorecards that would be most valuable to the business user.
Another good exercise to better understand the data is to spend some time where the data is created. For example, spending time in bank branches, as well as auto and mobile phone dealers, will help understand if and why certain fields are left blank, whether there is any data manipulation going on, and what the tolerance level is for filling out long application forms (relevant when determining the mix of self-declared versus bureau variables to have in the scorecard). This will help gauge the reliability of the data being studied.
In organizations where manual adjudication is done, or where applications have historically been manually adjudicated, interviewing the adjudicators also helps in understanding data biases. Manual lending and overriding bias the data; understanding the policies and lending guidelines, as well as the personal habits of individual adjudicators, will help identify which variables are biased and how. This is similar to studying an organization’s policy rules to understand how its data is biased. For example, if all decisions above 85 percent loan to value (LTV) are exceptions only, then performance for all accounts with LTV greater than 85 percent will be biased and will appear to be much better than reality.
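The LTV policy-rule example can be made concrete with a small filter that separates policy-biased records before analysis. The 85 percent threshold follows the example above; the function and field names are illustrative assumptions.

```python
def split_policy_biased(applications, ltv_key="ltv", threshold=0.85):
    """Split applications into an analyzable subset and a policy-biased one.

    Accounts above the exceptions-only LTV threshold were approved only as
    exceptions, so their observed performance looks better than that of the
    true through-the-door population; they should be analyzed separately.
    Threshold and field names here are illustrative, not from the book.
    """
    clean = [a for a in applications if a[ltv_key] <= threshold]
    biased = [a for a in applications if a[ltv_key] > threshold]
    return clean, biased

apps = [{"id": 1, "ltv": 0.60}, {"id": 2, "ltv": 0.90}, {"id": 3, "ltv": 0.82}]
clean, biased = split_policy_biased(apps)
print([a["id"] for a in clean], [a["id"] for a in biased])  # [1, 3] [2]
```

Flagging such segments up front lets the analyst decide whether to exclude them, model them separately, or treat their performance as censored.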
The objective here is to tap experience and discover insights that may not be obvious from analyzing data alone. This also helps in interpreting relationships later on and identifying biases to be fixed. As mentioned earlier, superior knowledge of data leads to better scorecards – this exercise enables the analyst to apply business experience to the data. Application scorecards are usually developed on data that may be more than two years old, and collections staff may be able to identify any trends or changes that need to be incorporated into the analyses. This exercise also provides an opportunity to test and validate experience within the organization. For example, I have gone back to adjudicators and shown them data to either validate or challenge some of their experience-based lending ideas.
Model Validation/Vetting Staff
The model oversight function has always been an important part of the model development process, and its role has become even more critical with the introduction of banking regulations and model risk management guidelines in most countries. The role of model validation and its key responsibilities are detailed in documents such as Supervisory Letter SR 11-7 from the Federal Reserve Board[12] and Basel II Working Paper 14.[13] Ideally, model validation staff should have:
● A good understanding of the mathematical and statistical principles employed in scorecard development.
● In-depth knowledge of corporate model validation policies, all relevant regulations, and the expectations of banking regulation agencies.
● Real-life experience in developing risk models and scorecards in financial institutions.
● A good understanding
12. Supervisory Guidance on Model Risk Management (SR 11-7), Federal Reserve Board, www.federalreserve.gov/bankinforeg/srletters/sr1107a1.pdf
13. Bank for International Settlements, Basel Committee Working Paper No. 14, 2005.