
Domain - Credit risk / Fintech
Roles and Responsibilities:
1. Development, validation and monitoring of application and behaviour scorecards for the retail loan portfolio
2. Improvement of collection efficiency through advanced analytics
3. Development and deployment of a fraud scorecard
4. Upsell / cross-sell strategy implementation using analytics
5. Create modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)
6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.
7. Experience with REST APIs and tools such as Swagger and Postman
8. Model deployment in AWS and management of the production environment (see the sketch after this list)
9. Team player who can work with cross-functional teams to gather data and derive insights
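As a rough illustration of item 8, a deployed scorecard is typically exposed as a SageMaker endpoint and scored over an HTTPS call. The sketch below uses boto3; the endpoint name, feature payload, and response shape are hypothetical, not the actual production setup.

```python
import json
import boto3  # AWS SDK; assumes credentials are already configured in the environment

# Hypothetical endpoint name and feature payload -- not taken from the posting.
ENDPOINT_NAME = "application-scorecard-prod"

runtime = boto3.client("sagemaker-runtime")

payload = {"age": 34, "monthly_income": 52000, "dpd_bucket": "0-30", "utilisation": 0.42}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)

score = json.loads(response["Body"].read())
print(score)  # e.g. {"probability_of_default": 0.07, "score": 712} -- illustrative shape
```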
Mandatory Technical skill set :
1. Previous experience in scorecard development and credit risk strategy development
2. Python and Jenkins
3. Logistic regression, scorecard development, ML and neural networks (see the sketch after this list)
4. Statistical analysis and A/B testing
5. AWS SageMaker, S3, EC2, Docker
6. REST API, Swagger and Postman
7. Excel
8. SQL
9. Visualisation tools such as Redash / Grafana
10. Bitbucket, GitHub and other version control tools
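For items 1 and 3 above, a minimal sketch of scorecard development with logistic regression is shown below, using scikit-learn on synthetic data. The feature names, the 600-points-at-50:1-odds / 20-points-to-double-odds scaling, and the data itself are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic application data -- purely illustrative.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "utilisation": rng.uniform(0, 1, n),
    "enquiries_6m": rng.poisson(2, n),
    "months_on_book": rng.integers(1, 60, n),
})
true_logit = -2.0 + 2.5 * df["utilisation"] + 0.3 * df["enquiries_6m"] - 0.02 * df["months_on_book"]
df["bad"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="bad"), df["bad"], test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Scale log-odds to scorecard points: 600 points at 50:1 good/bad odds, 20 points to double the odds.
base_score, base_odds, pdo = 600, 50, 20
factor = pdo / np.log(2)
offset = base_score - factor * np.log(base_odds)
log_odds_good = -model.decision_function(X_test)  # decision_function gives log-odds of "bad"
scores = offset + factor * log_odds_good
print(scores[:5].round())
```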

About Vola Finance
We're a fintech platform that launched in 2017 by providing our users with instant cash advances. Seven-plus years on, we've completed our Series A funding, and we are on track to do over $15 million in revenue this year. We're looking to grow the team and scale the company further in every vertical.
In the last seven years, Vola Finance has broadened its offering to more than just your regular cash advance app. We aim to provide a comprehensive solution for managing personal finances, especially for those who might struggle with traditional banking services.
Similar jobs
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi (see the sketch after this list).
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
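Since Pulumi is one of the IaC options listed above and has a Python SDK, a minimal sketch might look like the following. The bucket name, tags, and configuration are hypothetical; it would be run inside a Pulumi project with `pulumi preview` / `pulumi up`.

```python
import pulumi
import pulumi_aws as aws

# Hypothetical resource -- the name, tags, and settings are illustrative only.
artifact_bucket = aws.s3.Bucket(
    "ci-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"team": "devops", "env": "staging"},
)

# Expose the generated bucket name as a stack output.
pulumi.export("artifact_bucket_name", artifact_bucket.id)
```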
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
About Company:
The Organisation transforms individuals and teams through holistic, heart-centric and engaging learning experiences that unleash their true potential and help organizations achieve business outcomes. We create customized, high-impact training programs that are conceptualized, designed and delivered by a core team of senior professionals.
Role & Responsibilities:
- Looking for someone who can think creatively about packaging marketing messages and is passionate about the brand. Experience in a marketing, communications or internal communications role with a marketing mindset is expected. The candidate will handle our communication portfolio, so good problem-solving, writing, administrative and PR skills are a must.
- Experienced in Lead Generation, Marketing, Advertising, Social Media Marketing, Digital Advertising, Database Generation, and B2B Sales.
- Plan, organize, and implement market surveys to obtain data that provide insight into market trends and consumer requirements
- Interpret data obtained from market research/surveys to produce results useful for making effective business decisions
- Carry out demographic surveys to identify potential customers
- Contact potential customers through emails, calls, and home visits to create product/service awareness
- Employ knowledge of company goals in carrying out marketing operations
- Establish good working relationships and rapport with clients to ensure continued patronage and minimal consumer attrition
- Support marketing managers in the development of pricing strategies to set suitable prices for products.
Requirements
1. Excellent written and oral communication skills and detail-oriented
2. Very good planning & project management skills
3. Experience in creating digital ads, video, and static across social media platforms such as LinkedIn, Facebook, YouTube, WhatsApp, e-mail, and display
4. Capable of managing a team & working independently with external vendors
"Hands-on experience with minimum 3 years of programming experience in JAVA 8 (or) 11.
Good experience using Spring Boot, Hibernate or JPA, Spring Security, Spring MVC, Spring IoC, Spring AOP, or any other Spring framework.
Good experience utilizing and working with RESTful web services and the Java Collections framework.
"Experience in Swagger, Microservices, Basic security, Design patterns.
Good experience utilizing and working with Cosmos DB or MySQL.
Experience: 4 - 8 Years
Job Description
- Architect, build and maintain excellent React Native applications with clean code.
Roles and Responsibilities
- Work as part of a small team to build React Native iOS / Android applications for clients
- Implement pixel-perfect UIs that match designs.
- Implement clean, modern, smooth animations and transitions that provide an excellent user experience.
- Integrate third-party APIs.
- Release applications to the Apple App Store and Google Play Store.
- Work with native modules when required.
- Complete two-week sprints and participate in sprint retrospectives and daily standups.
- Assist with building estimates.
- Interface with clients via Slack, Zoom, and email.
Responsibilities:
• Take on complex problems that span multiple components and teams.
• Independently own one or more modules, including requirement analysis, design, development, maintenance & support
• Write extensive, efficient code to address complex modules that handle the interaction between multiple components.
• Rapidly iterate to add new functionality and solve complex problems with simple and intuitive solutions
• Produce architectures with clean interfaces that are efficient and scalable
• Participate and contribute to architectural discussions
• Solve production issues; investigate and provide solutions to minimize business impact during outages
• Continuously improve performance metrics of modules you own.
• Collaborate effectively across teams to solve problems, execute and deliver results
Requirements:
• Experience: 3+ years
• A Bachelor's or Master's Degree in Computer Science
• Software engineering and product delivery experience, with a strong background in algorithms
• Experience in architecting & building real-time, large-scale e-commerce applications
• Experience with high-performance websites serving millions of daily users is a plus
• Excellent command over Data Structures and Algorithms
• Experience with web technologies, Go/Java/Python
• Language: Go or Python
• Strong expertise in data structures and algorithms
• Strong leadership skills - experience mentoring, building products from scratch, and owning design and architecture
• Have scaled systems from scratch
• Have grown a product from a small user base to a large one while writing optimized code
• Comfortable with both HLD and LLD (high-level and low-level design)
You will be building product features solving complex business problems using state-of-the-art technologies (Docker, Kubernetes, GCP, Python, Angular, ML/AI) in an ownership- and integrity-driven culture.
What will you do:
- Architect full-stack solutions for complex business requirements in a fast-paced environment while optimising for scalability, performance, concurrency, availability, security and code quality.
- Own and execute engineering projects end to end, including API design (see the sketch below), DB design, project planning, coding, and fluent communication with the rest of the team.
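As a sketch of the API-design part of this work, the snippet below defines a tiny REST service with FastAPI and Pydantic validation. FastAPI is only one common Python choice and is not named in the posting; the resource model and in-memory store are illustrative.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory store standing in for a real database -- illustrative only.
_items: dict[int, dict] = {}

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> dict:
    if item_id in _items:
        raise HTTPException(status_code=409, detail="item already exists")
    _items[item_id] = item.model_dump()  # pydantic v2 assumed
    return _items[item_id]

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="item not found")
    return _items[item_id]

# Run locally with: uvicorn app_module:app --reload
```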
What we’re looking for:
- Bachelors/Masters in Computer Science or a related field
- Experience: 3-6 years
- Strong ownership, integrity and drive to succeed individually and as a startup
- Good grasp on algorithms and data structures
- Focusing on developing new concepts and user experiences through rapid prototyping and collaboration with the best-in-class research and development team.
- Reading research papers and implementing state-of-the-art techniques for computer vision
- Building and managing datasets.
- Providing rapid experimentation, analysis, and deployment of machine/deep learning models
- Based on requirements set by the team, helping develop new and rapid prototypes
- Developing end to end products for problems related to agritech and other use cases
- Leading the deep learning team
- MS/ME/PhD in Computer Science, Computer Engineering or equivalent
- Proficient in Python and C++; CUDA a plus
- International conference papers/patents, algorithm design, deep learning development, programming (Python, C/C++)
- Knowledge of multiple deep-learning frameworks, such as Caffe, TensorFlow, Theano, and Torch/PyTorch (see the sketch after this list)
- Problem Solving: Deep learning development
- Vision, perception, control, planning algorithm development
- Track record of excellence in machine learning / perception / control, including patents and publications in international conferences or journals.
- Communications: Good communication skills
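A minimal PyTorch sketch of the kind of vision-model prototyping described above follows. The architecture, input size, and class count are illustrative assumptions, not a production agritech model.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Small illustrative CNN for 3-channel images, e.g. 64x64 crop-health patches."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # 64x64 input pooled twice -> 16x16

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.head(x.flatten(1))

model = TinyCNN()
dummy = torch.randn(8, 3, 64, 64)                      # batch of 8 fake images
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()                                        # one illustrative backward pass (no optimizer shown)
print(logits.shape, loss.item())
```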
Your mission is to help lead the team toward creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex and mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies will inspire your team to follow suit.
Responsibilities and Duties :
- As a Data Engineer you will be responsible for the development of data pipelines for numerous applications handling all kinds of data: structured, semi-structured and unstructured. Big data knowledge, especially in Spark & Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions
Education level :
- Bachelor's degree in Computer Science or equivalent
Experience :
- Minimum 3+ years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience
- Expertise in application, data and infrastructure architecture disciplines
- Expertise in designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design and business processes
Proficiency in :
- Modern programming languages like Java, Python, Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka
- Writing well-optimized SQL queries
- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (optional; see the Airflow sketch after this list)
- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions
- Knowledge of system development lifecycle methodologies, such as Waterfall and Agile.
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.
- Experience generating physical data models and the associated DDL from logical data models.
- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.
- Experience enforcing data modeling standards and procedures.
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
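For the Airflow orchestration point above, a minimal DAG sketch is shown below. The DAG id, schedule, and the extract/load callables are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull the day's raw files from source storage.
    print("extracting raw files")

def load(**context):
    # Placeholder: write curated output to the warehouse.
    print("loading curated tables")

with DAG(
    dag_id="daily_ingest",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # extract must finish before load runs
```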
Skills :
Must Know :
- Core big-data concepts
- Spark - PySpark/Scala (see the sketch after this list)
- A data integration tool such as Pentaho, NiFi, SSIS, etc. (at least one)
- Handling of various file formats
- Cloud platform - AWS/Azure/GCP
- Orchestration tool - Airflow
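A minimal PySpark sketch of the kind of transformation and file-format handling listed above. The S3 paths, column names, and aggregation are illustrative assumptions only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical input path and columns -- adjust to the real source.
orders = spark.read.option("header", True).csv("s3a://raw-bucket/orders/*.csv")

daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.countDistinct("order_id").alias("orders"))
)

# Write out as partitioned Parquet for downstream analytics.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/daily_revenue/"
)
```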










