
The Risk team is responsible for the overall risk framework and strategy of the organization. The team develops risk models, produces sector reports from a risk perspective, and provides in-depth analysis of and recommendations on the risk profiles of clients.
In this role, the person plays a leadership role in the Risk function, identifying, assessing, monitoring and making timely, measured judgments on all current and potential future risks in order to protect the group's interests, assets and reputation while growing shareholder value and profitability.
Responsibilities
- Manage risk for the retail and wholesale portfolios. Should have managed organization-level risk policies, credit underwriting and collection functions.
- Manage the organization's risk framework, including updating existing risk management mechanisms and establishing new ones to identify, assess, monitor, measure and control a broad spectrum of risks. Ensure all policies and frameworks are kept up to date and fully deployed.
- Continuously monitor and improve the quality of the organization's credit and lending portfolio through credit policy direction and implementation.
- Possess a complete understanding of the business. Evaluate, improve, and monitor the business, including assisting in reporting to the Board and Board Committees and providing leadership on the effectiveness of credit risk management controls, systems, and processes across the organization.
- Lead and mentor a team of analysts.
Requirements
- 7+ years of relevant experience in a risk role in the financial services industry.
- Exposure to both wholesale and retail risk in a banking environment is preferred. Some experience in a business/sales role in the banking/NBFC space is an added advantage.
- Should have held a position of authority over the allocation and appropriation of limits.
- Understanding of the financial-institutions sector and a good ability to interpret financial trends.
- High proficiency in Python and SQL/NoSQL.
- Experience with big data and cloud computing e.g., PySpark, data lake is a plus.
- Outstanding written, verbal and interpersonal communication skills.
- Ability to speak at various levels of an organization (CXOs to analysts).
- Demonstrated resilience: has stayed with companies and taken businesses to scale, and has preferably not 'hopped' jobs. Strong organizational skills and excellent follow-through.
- Present / articulate / position an idea compellingly and ability to work in a fast-paced dynamic environment.
- Takes personal ownership; Self-starter; Ability to drive projects with minimal guidance and focus on high impact work.
- Learns continuously; Seeks out knowledge, ideas and feedback.
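Since the role pairs risk modeling with Python and SQL proficiency, a small sketch can make the quantitative side concrete. This is a minimal, illustrative computation of the standard expected-loss formula (EL = PD × LGD × EAD) over a toy book; the class, field names and sample figures are hypothetical, not part of this posting.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """One credit exposure in the portfolio (illustrative fields)."""
    name: str
    pd: float   # probability of default, 0..1
    lgd: float  # loss given default, fraction of exposure lost
    ead: float  # exposure at default, in currency units

def expected_loss(e):
    """Standard expected-loss formula: EL = PD * LGD * EAD."""
    return e.pd * e.lgd * e.ead

def portfolio_expected_loss(book):
    """Sum of per-exposure expected losses across the book."""
    return sum(expected_loss(e) for e in book)

book = [
    Exposure("retail_a", pd=0.02, lgd=0.45, ead=100_000),
    Exposure("wholesale_b", pd=0.01, lgd=0.60, ead=500_000),
]
# 0.02*0.45*100000 plus 0.01*0.60*500000, i.e. about 3900 in total
print(portfolio_expected_loss(book))
```

In practice a risk team would source PD/LGD/EAD from fitted models and bureau data rather than literals, but the aggregation shape is the same.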

About Kaleidofin
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics
  - Cloud Composer (based on Apache Airflow) for orchestration of data workflows
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
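The extract-transform-load shape described above, including the data-quality checks, can be sketched in plain Python. This is a local illustration only, not the actual GCP pipeline (a real implementation would run on Dataflow/Composer and load into BigQuery); the column names and validation rule are hypothetical.

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast types, normalise values, drop rows failing validation."""
    out = []
    for r in rows:
        try:
            rec = {"user_id": int(r["user_id"]),
                   "country": r["country"].strip().upper(),
                   "amount": float(r["amount"])}
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine malformed rows
        if rec["amount"] >= 0:  # validation rule: no negative amounts
            out.append(rec)
    return out

def load(rows, sink):
    """Load: append clean records to the target store (here, a list)."""
    sink.extend(rows)

raw = "user_id,country,amount\n1,in,250.0\n2,us,-5\nbad,uk,1.0\n3,sg,99.5\n"
sink = []
load(transform(extract(raw)), sink)
print(len(sink))  # the negative-amount row and the non-integer id row are rejected
```

The same separation of stages is what makes pipelines testable and lets quality checks be enforced before anything reaches the warehouse.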
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
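As an illustration of the "complex SQL for extraction and validation" requirement, here is a latest-record-per-key query using a window function, run against in-memory SQLite purely so the example is self-contained. The table and values are invented; the same ANSI window-function syntax works on SQL Server, Oracle and PostgreSQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payments (id INTEGER, customer TEXT, amount REAL, paid_at TEXT);
INSERT INTO payments VALUES
  (1, 'acme', 100.0, '2024-01-01'),
  (2, 'acme', 250.0, '2024-02-01'),
  (3, 'bolt', 75.0,  '2024-01-15');
""")

# Latest payment per customer: rank rows within each customer by date,
# then keep only the first-ranked row.
LATEST_PER_CUSTOMER = """
SELECT customer, amount, paid_at FROM (
  SELECT *, ROW_NUMBER() OVER (
    PARTITION BY customer ORDER BY paid_at DESC
  ) AS rn
  FROM payments
)
WHERE rn = 1
ORDER BY customer;
"""
rows = conn.execute(LATEST_PER_CUSTOMER).fetchall()
print(rows)  # [('acme', 250.0, '2024-02-01'), ('bolt', 75.0, '2024-01-15')]
```

Queries of this shape are also common validation checks, e.g. confirming a dimension table holds exactly one current row per key.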

- Experience building and managing large-scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem-solving abilities, and a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter: someone who thrives in fast-paced environments with minimal 'management'.
- Have exposure to and working knowledge of AI environments, with machine-learning experience.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra and Elasticsearch.
- Have good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
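The "large-scale computational models such as MapReduce" bullet can be made concrete with the classic word-count example, sketched here as plain Python functions standing in for the map, shuffle and reduce phases a framework like Hadoop or Spark would distribute across machines. An illustration of the model, not any particular framework's API.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit (word, 1) pairs for one document."""
    return [(w.lower(), 1) for w in doc.split()]

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts emitted for each word."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big models", "data pipelines"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"])  # 2 2
```

The point of the model is that map and reduce are pure per-key functions, so the framework can parallelise them freely and rerun failed tasks safely.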
What You'll Do
You will be part of our data platform & data engineering team. As part of this agile team, you will work in our cloud-native environment and perform the following activities to support core product development and client-specific projects:
- You will develop the core engineering frameworks for an advanced self-service data analytics product.
- You will work with multiple types of data storage technologies such as relational, blobs, key-value stores, document databases and streaming data sources.
- You will work with the latest technologies for data federation with MPP (Massively Parallel Processing) capabilities.
- Your work will entail backend architecture to enable data modeling, data queries and API development for both back-end and front-end data interfaces.
- You will support client-specific data processing needs using SQL and Python/PySpark.
- You will integrate our product with other data products through Django APIs
- You will partner with other team members in understanding the functional / non-functional business requirements, and translate them into software development tasks
- You will follow software development best practices, ensuring that the architecture and quality of the code you write meet the high standard expected of enterprise software.
- You will be a proactive contributor to team and project discussions
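The data-modeling and back-end data-interface work described above often reduces to shaping typed model objects into JSON payloads for an API. A minimal stdlib sketch of that shape follows; the `Metric` model and field names are hypothetical, and a real implementation here would use Django serializers.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Metric:
    """Illustrative data model for a self-service analytics response."""
    name: str
    value: float
    unit: str

def to_api_payload(metrics):
    """Shape model objects into the JSON body a back-end API would return."""
    return json.dumps({
        "count": len(metrics),
        "results": [asdict(m) for m in metrics],
    })

payload = to_api_payload([Metric("latency_p95", 120.5, "ms")])
print(payload)
```

Keeping the model layer separate from the serialization layer is what lets the same data back both back-end and front-end interfaces.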
Who you are
- Strong education track record: a Bachelor's or an advanced degree in Computer Science or a related engineering discipline from an Indian Institute of Technology or an equivalent premier institute.
- 2-3 years of experience in data queries, data processing and data modeling
- Excellent ANSI SQL skills to handle complex queries
- Excellent Python and Django programming skills.
- Strong knowledge of and experience with modern, distributed data-stack components such as Spark, Hive, Airflow, Kubernetes, Docker, etc.
- Experience with cloud environments (AWS, Azure) and native cloud technologies for data storage and data processing
- Experience with relational SQL and NoSQL databases, including Postgres, blob stores, MongoDB, etc.
- Familiarity with ML models is highly preferred
- Experience with Big Data processing and performance optimization
- Should know how to write modular, optimized and documented code.
- Should have good knowledge of error handling.
- Experience with version control systems such as Git.
- Strong problem solving and communication skills.
- Self-starter, continuous learner.
Good to have some exposure to
- Start-up experience is highly preferred
- Exposure to any Business Intelligence (BI) tools like Tableau, Dundas, Power BI etc.
- Agile software development methodologies.
- Working in multi-functional, multi-location teams
What You'll Love About Us – Do ask us about these!
- Be an integral part of the founding team. You will work directly with the founder
- Work Life Balance. You can't do a good job if your job is all you do!
- Prepare for the Future. Academy – we are all learners; we are all teachers!
- Diversity & Inclusion. HeForShe!
- Internal Mobility. Grow with us!
- Business knowledge of multiple sectors
Full-time position
Work Location: Hyderabad
Experience level: 3 to 5 years
Mandatory Skills: Python, Django/Flask and REST APIs
Package: Up to 20 LPA
Job Description:
- Experience in web application development using Python and Django/Flask.
- Proficient in developing REST APIs and integrations, and familiar with JSON-formatted data.
- Good to have knowledge of front-end frameworks like Vue.js/Angular/React.js.
- Writing high-quality code following best practices, based on technical requirements.
- Hands-on experience in analysis, design, coding, and implementation of complex, custom-built software products.
- Should have database experience, preferably with Redis.
- Experience working with Git or an equivalent code management / version control system, following best practices.
- Good to have knowledge of Elasticsearch, AWS, Docker.
- Should have an interest in exploring and working in the cyber security domain.
- Experience with Agile development methods.
- Should have strong analytical and logical skills.
- Should be good at fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval.
- Should have good communication skills and client-facing experience.
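The REST-API and JSON skills listed above rest on the request/response cycle that frameworks like Django and Flask wrap. A minimal stdlib sketch of one JSON endpoint as a bare WSGI application follows, exercised directly without running a server; the `/api/health` path is an invented example.

```python
import json

def app(environ, start_response):
    """Minimal WSGI application exposing one JSON REST endpoint."""
    if environ["PATH_INFO"] == "/api/health" and environ["REQUEST_METHOD"] == "GET":
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps({"status": "ok"}).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Exercise the app directly: WSGI apps are plain callables, so no server
# is needed to test the request/response cycle.
captured = {}
def start_response(status, headers):
    captured["status"] = status

body = b"".join(app({"PATH_INFO": "/api/health", "REQUEST_METHOD": "GET"},
                    start_response))
print(captured["status"], body)
```

Django and Flask views follow the same contract underneath: a request mapping in, a status line, headers and a JSON body out.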
Job Description
Viaan Industries Limited is looking for a Software Developer who is motivated to combine the art of design with the art of programming. The ideal candidate will have a broad technical skill set and extensive industry experience, and should be able to design, develop, test and deploy the products the company needs, as well as work with other developers to determine product strategy.
Responsibilities
- Own the product: Design, Develop & Deploy
- Ensure Quality & sustainability of the architecture
- Obsess about code quality, automated testing, continuous integration, code reviews, and documentation
- Provide quick & creative solutions for day-to-day operational issues
- Ensure that all user input is validated before it is submitted to the back-end.
- Ensure the technical feasibility of UI/UX designs.
Required Skills
- Proficient in server-side programming languages: PHP (Laravel).
- Proficient in designing & architecting scalable products.
- Must have hands-on experience with AJAX and jQuery.
- Database knowledge: MongoDB.
- Knowledge of development tools: Bitbucket, Git, CI/CD with Bitbucket, and JIRA.
- Web server technologies: Apache, Nginx.
- Solid foundation in data structures, algorithms, and system design
- Expert in HTTP concepts such as the request/response cycle, content negotiation, CORS, etc.
- Management of hosting environment, including database administration and scaling an application to support load changes
- Optimization of the application for maximum speed and scalability
- Implementation of security and data protection
- Setup and administration of backups
- Exposure to microservices architecture/ API concepts would be an added advantage.
- Good organizational and time-management skills
Good To Have
- Knowledge of NodeJs, Angular
- Familiarity with AWS products - Beanstalk, Elb, ECS, EC2, SNS, SQS, S3, RDS, etc.
Qualifications
- Bachelor’s degree or equivalent in computer science / engineering
- 6 to 9 years of experience with very large-scale applications.
- Strong problem-solving skills and computer science fundamentals: data structures and algorithms and their space & time complexities.
- Design (LLD & HLD) and architect technical solutions for the business problems of a very large-scale portal.
- Strong hands-on and practical working experience with Java as the programming language
- Strong debugging skills - Code, Logs, DB, JVM
- Excellent coding skills - should be able to convert design into code fluently.
- Hands-on experience working with Databases and Linux platform
Candidates must have experience with start-up, product-based companies.
The opportunity is with a client in the e-mobility domain.








