We are problem solvers
Our client is a major telecom organization with revolutionary products, a proven process, and 30 years of delivering hosted technology solutions to complex business problems in the US.
Although our client used classical models to identify fraudulent calls, their customers were still losing over 650 million dollars. At Satellitez, we built machine learning models (supervised, unsupervised, and autoencoder-based) to identify fraudulent calls, roughly 1 in 100,000, with a catch rate above 80% and a false-positive rate below 1%. Our team of data scientists used deep neural networks built on Amazon machine learning services, TensorFlow, and H2O autoencoders to perform the necessary experiments. Finally, the model was integrated into the client's cloud environment.
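The core idea behind autoencoder fraud detection is reconstruction error: a model compressed to reproduce normal traffic reconstructs fraudulent records poorly. The sketch below is illustrative only, not the production model; it uses a linear (PCA-style) autoencoder in NumPy on made-up call features, and the feature names and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical call features: duration (s), cost ($), calls per hour (normal traffic).
normal = rng.normal(loc=[300.0, 2.0, 5.0], scale=[60.0, 0.5, 1.0], size=(1000, 3))
# A few anomalous records far from the normal pattern (stand-ins for fraud).
fraud = np.array([[3000.0, 40.0, 80.0], [2500.0, 35.0, 60.0]])

# "Encoder": project standardized data onto its top principal component;
# "decoder": project back. This mimics a 1-unit linear bottleneck.
mu = normal.mean(axis=0)
sigma = normal.std(axis=0)
z = (normal - mu) / sigma
_, _, vt = np.linalg.svd(z, full_matrices=False)
w = vt[:1].T  # 3x1 bottleneck

def reconstruction_error(x):
    zx = (x - mu) / sigma
    recon = zx @ w @ w.T          # encode, then decode
    return np.mean((zx - recon) ** 2, axis=1)

# Threshold at a high quantile of the normal-traffic reconstruction error.
threshold = np.quantile(reconstruction_error(normal), 0.99)
print(reconstruction_error(fraud) > threshold)
```

A deep autoencoder replaces the linear projection with nonlinear layers, but the flagging logic, score each record by how badly the bottleneck reconstructs it, is the same.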
Our client is a state-owned bank whose objective was to select potential customers for installment loans using machine learning algorithms in order to maximize revenue.
Prior to Satellitez's modeling, our client had no system to identify customers likely to pay off a loan immediately after large credit card activity. The goal of the project was to identify the right subset of transactors, so our client could make them a lower-rate offer to keep them from bouncing, without extending that offer to the revolver segment.
Our data scientists applied learning algorithms including clustering, correlation analysis, classification, and time-series modeling. The reproducible analysis covered the characteristics of sample customers, identification of shopping categories, Box-Cox normalization, payoff-period estimation, and programming in R and Python. The model was deployed on local VMs.
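As an illustration of the Box-Cox step, the sketch below normalizes a right-skewed spend variable. It assumes SciPy is available, and the spend data is synthetic, not client data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical right-skewed spend amounts, the kind of variable
# Box-Cox normalization is typically applied to.
spend = rng.lognormal(mean=4.0, sigma=0.8, size=500)

# Box-Cox fits a power-transform parameter lambda that makes
# the data as close to Gaussian as possible.
normalized, lam = stats.boxcox(spend)

print(f"fitted lambda: {lam:.3f}")
print(f"skew before: {stats.skew(spend):.2f}, after: {stats.skew(normalized):.2f}")
```

For log-normal data the fitted lambda lands near zero, i.e. close to a plain log transform; downstream clustering and classification then work on the far less skewed values.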
CREDIT LIMIT INCREASE
Our client was a major financial institution that used a fixed method to offer credit line increases to customers, with no ability to adjust the offer based on a customer's current or past behavior.
The major contribution of our team in this project was the use of random forest and generalized linear model (GLM) algorithms to select the right subset of customers. At Satellitez, we were able to increase the success rate from 50% to 72%, and then to 86%.
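The random forest vs. GLM pairing can be sketched as follows. This is a minimal illustration assuming scikit-learn, with synthetic customer features and a hypothetical acceptance rule; none of the features, rule, or numbers come from the project.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical customer features: utilization, payment ratio, tenure (all invented).
X = rng.uniform(size=(2000, 3))
# Hypothetical, deliberately nonlinear rule for accepting a credit-line increase.
y = ((X[:, 0] > 0.5) & (X[:, 1] + X[:, 2] > 0.9)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# GLM baseline (logistic regression) next to a random forest,
# mirroring the two model families named above.
glm = LogisticRegression().fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"GLM accuracy: {glm.score(X_te, y_te):.2f}")
print(f"Random forest accuracy: {rf.score(X_te, y_te):.2f}")
```

On interaction-heavy rules like this one the forest typically wins, while the GLM stays interpretable; in practice the two are compared on held-out data before choosing which drives the customer selection.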
The steps of the reproducible computation in Amazon SageMaker included training, validation, testing, and utilization analysis compatible with our client's standards and methodology. Twelve datasets from different sources, containing over 3 billion records in total, were available; this required data integration and data reduction, and the solution was deployed in the cloud.
Our client was a major tech organization with extremely powerful products, a proven process, and 50 years of delivering technology solutions to businesses in the US.
Our client's DevOps teams were receiving just over 200 alerts per minute, almost 90% of which were noise. Relying on human operators to identify system alerts and open a ticket for each major outage was beyond inefficient. Our team of data scientists developed an unsupervised model, with just three experiments, that detects changing patterns that are developing into an outage and predicts those outages successfully.
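A minimal sketch of this kind of unsupervised change detection is shown below, using a rolling z-score over synthetic per-minute alert counts. The data, window size, and threshold are all assumptions for illustration, not the client's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-minute alert counts: ~200/min baseline, with a developing
# outage at the end where the rate ramps up before anything actually breaks.
baseline = rng.poisson(lam=200, size=120).astype(float)
ramp = rng.poisson(lam=[260, 320, 400, 520], size=4).astype(float)
series = np.concatenate([baseline, ramp])

def rolling_zscores(x, window=60):
    """Score each minute against the trailing window: no labels required."""
    scores = np.zeros_like(x)
    for i in range(window, len(x)):
        hist = x[i - window:i]
        scores[i] = (x[i] - hist.mean()) / (hist.std() + 1e-9)
    return scores

scores = rolling_zscores(series)
# Flag minutes whose alert rate deviates sharply from recent history.
flagged = np.where(scores > 4.0)[0]
print(flagged)
```

Because the score compares each minute only to its own recent history, the noisy 90% of alerts stays below threshold while a ramp toward an outage is flagged minutes before it peaks, which is the window where proactive action is possible.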
This proactive modeling gives our client a competitive edge over the competition, enabling them to take proper action within minutes, when it matters most.