50+ Python Jobs in India
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and source-to-target transformation mapping, as well as automation and testing strategies. The role translates business needs into technical solutions while adhering to established data guidelines and approaches, from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction; ensure data architecture deliverables are developed and comply with standards and guidelines; implement the data architecture; and support technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business and technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation.
- Excellent understanding of in-memory distributed computing frameworks such as Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and cloud warehouse/NoSQL databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob Storage, AWS S3, Google Cloud Storage, etc.
- Deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
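Since the posting stresses Spark parameter tuning and resource allocation, here is a minimal, illustrative sketch (not from the posting) of one common back-of-the-envelope calculation: choosing a shuffle partition count that targets roughly 128 MB per partition, rounded up to a multiple of the available cores. The target size, the helper name, and the example numbers are assumptions for illustration only.

```python
# Illustrative sketch: one common Spark tuning decision is choosing
# spark.sql.shuffle.partitions so each task handles ~128 MB of data.
import math

TARGET_PARTITION_BYTES = 128 * 1024 * 1024  # widely cited rule of thumb

def suggest_shuffle_partitions(input_bytes: int, cores: int) -> int:
    """Suggest a shuffle partition count for a given input size.

    Aims for ~128 MB per partition, rounded up to a multiple of the
    available cores so no executor sits idle in the final task wave.
    """
    by_size = math.ceil(input_bytes / TARGET_PARTITION_BYTES)
    # Round up to the nearest multiple of the core count.
    return max(cores, math.ceil(by_size / cores) * cores)

# Example: a 10 GB shuffle on a cluster with 32 cores.
print(suggest_shuffle_partitions(10 * 1024**3, 32))  # → 96
```

The resulting number would then be set via `spark.conf.set("spark.sql.shuffle.partitions", n)`; real tuning also weighs skew, executor memory, and adaptive query execution.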
Embedos is looking for superheroes who can help us succeed in our endeavour of becoming a beacon for problem-solving Industrial IoT solutions.
Location: MUMBAI
Vacancies: 3-4
Embedos makes controllers, interface devices, and cloud-based software solutions for remote monitoring and control and Industry 4.0 applications.
We are looking for engineering superheroes who have a flair for and interest in core hardware, firmware, embedded software, networking, and web technologies.
We want engineers with wide interests who are keen to work across multiple specializations and functions in the embedded domain:
• Hardware design: small-signal, telecommunication, and interface electronics; digital design; the latest microprocessors (STM, ESP); interfaces such as I2C and SPI; peripherals; schematics; PCB routing
• Programming languages for embedded devices, their respective IDEs, and debugging systems
• RTOS, Real time programming concepts.
• Linux Kernel programming, peripheral drivers.
• Communication protocols such as Modbus, CAN, OPC, and other industrial protocols.
• Open source software, documentation, versioning systems.
• Web technology, Web applications, Networking technology, Cloud Interfacing.
We invite you to come and join in our Core team to make this endeavour a success and share the rewards.
Embedos is looking for superheroes to work on cutting-edge technology: interfacing IoT-enabled firmware, cloud computing software, generating exciting user interfaces, developing APIs, designing web app architectures, deploying reusable code, and more.
Job Title: Inventory & Product Analytics Specialist
Location: Indore (M.P.)
Job Type: Full-Time
Job Summary
We are seeking a highly analytical and detail-oriented Inventory & Product Analytics Specialist to join our team. The successful candidate will be responsible for analyzing inventory data, monitoring product performance, and providing actionable insights to optimize inventory levels, improve product availability, and drive overall business performance. This role requires strong analytical skills, an understanding of inventory management, and the ability to work collaboratively across departments.
Key Responsibilities
- Inventory Management & Analysis:
- Monitor and analyze inventory levels to ensure optimal stock levels across various product categories.
- Identify slow-moving, overstocked, and out-of-stock items, and provide actionable recommendations to manage inventory flow.
- Develop and maintain inventory forecasting models using historical data, trends, and market demands.
- Conduct regular audits to ensure data accuracy in the inventory management system.
- Provide reports on key inventory metrics such as stock turnover, days of inventory, and reorder points.
- Product Analytics:
- Analyze product performance, including sales trends, profit margins, and product lifecycles, to drive decision-making.
- Track and report on key performance indicators (KPIs) related to product sales, including top-performing and underperforming items.
- Collaborate with product development, marketing, and procurement teams to evaluate product trends and suggest strategies to enhance product offerings.
- Assist in setting pricing strategies based on data analysis of product performance and market conditions.
- Data Reporting & Visualization:
- Create and maintain dashboards and reports to track inventory levels, product performance, and overall supply chain health.
- Present insights and recommendations to key stakeholders, including management, operations, and marketing teams.
- Provide ad-hoc data analysis as required by various departments.
- Process Improvement:
- Identify inefficiencies in inventory management processes and work with cross-functional teams to implement improvements.
- Suggest automation and technology solutions for inventory tracking and reporting.
- Stay updated on industry trends, inventory management tools, and analytics best practices.
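The inventory metrics the responsibilities above call out (stock turnover, days of inventory, reorder points) have standard textbook formulas. A small, hypothetical Python sketch, with illustrative numbers that are not tied to any company data:

```python
# Textbook inventory-metric formulas; names and figures are illustrative.

def stock_turnover(cogs: float, avg_inventory_value: float) -> float:
    """How many times per period inventory is sold and replaced."""
    return cogs / avg_inventory_value

def days_of_inventory(turnover: float, period_days: int = 365) -> float:
    """Average number of days an item stays in stock."""
    return period_days / turnover

def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float = 0.0) -> float:
    """Stock level at which a new purchase order should be placed."""
    return daily_demand * lead_time_days + safety_stock

t = stock_turnover(cogs=500_000, avg_inventory_value=100_000)
print(t)                    # → 5.0 turns per year
print(days_of_inventory(t)) # → 73.0 days
print(reorder_point(daily_demand=40, lead_time_days=7, safety_stock=120))  # → 400.0
```

In practice these would be computed per SKU from the inventory system's data rather than from single aggregate figures.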
Qualifications
- Education:
- Bachelor’s degree in Business, Supply Chain Management, Data Analytics, or a related field.
- Master’s degree is a plus.
- Experience:
- 2-5 years of experience in inventory management, product analytics, data analysis, or a similar role.
- Experience in retail, e-commerce, or manufacturing industries preferred.
- Technical Skills:
- Proficiency in Excel, SQL, and other data analysis tools (e.g., Python, R) is a plus.
- Experience with inventory management systems (e.g., SAP, Oracle, NetSuite) and product analytics platforms.
- Knowledge of data visualization tools (e.g., Power BI, Tableau, Looker).
- Key Competencies:
- Strong analytical and problem-solving skills.
- Excellent attention to detail and ability to work with large datasets.
- Effective communication skills, with the ability to present complex data in a clear and concise manner.
- Ability to work cross-functionally and collaboratively with various teams.
Benefits:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
About Us
CallHub provides cloud-based communication software for nonprofits, political parties, advocacy organizations, and businesses. It has delivered hundreds of millions of messages and calls for thousands of customers. It helps political candidates get their message across to voters during campaigns, conduct surveys, manage event/town-hall invites, and recruit volunteers. We are profitable, with 8,000+ paying customers across North America, Australia, and Europe. Our customers include Uber, the Democratic Party, and major political parties in the US, Canada, UK, France, and Australia.
About the Role
As a Senior Quality Engineer at CallHub, you will provide technical leadership to the quality team and expand our automation goals, advocating for high-quality user experiences without compromising engineering velocity. Your primary role will be to design and develop the right test strategy, including test automation and test plans targeted to uncover defects early, helping build quality upstream and ensuring quality products reach our customers. This includes adopting best practices and tools, driving product automation, suggesting new processes and policies, investigating customer-reported issues, deducing patterns in those issues, and devising solutions to the technical challenges, all in service of customer delight. You will act as a subject matter expert on everything we test and build. You are expected to understand the entire product workflow, the customer experience, and the backend (API), and how each affects the others, acting as the gatekeeper for quality. As part of the Engineering team, you will work with highly technical software engineers, product managers, and operations engineers to deliver products that delight customers with an exceptional experience, in terms of both usability and performance. Our teams are small yet incredibly impactful, and we're curious, highly motivated, engaged, and empowered to make a difference.
We're looking for engineers with strong analytical skills, meticulous attention to detail, an attitude for perfection, and knowledge of quality engineering processes, who are inquisitive, eager to learn new technologies, and love working in a dynamic environment.
Your Responsibilities
- Drive the overall quality and testing strategy, including performance and resiliency testing, with the right test design and automation of test cases and test data, ensuring all areas of the product are thoroughly tested and delighting customers by delivering a defect-free product
- Design & Development of automation test scripts (UI & API) using modern low-code/no-code tools integrating with CI/CD pipeline to achieve 100% automation
- Mentor the team with writing effective test strategy & design, test plan and test cases in solving complex quality issues
- Find scalable ways to automate functional, usability, performance, API, database and security testing
- Review product requirements and specifications and provide product & design feedback
- Track quality assurance metrics by analyzing and categorizing all customer reported bugs
- Quickly triage and test bug fixes on an ongoing basis
- Work in an agile environment, follow process guidelines and deliver tasks
- Participate in software architecture, design discussions and code reviews
- Be proactive, take ownership and be accountable
- Stay up-to-date with new testing tools and test strategies
What we’re looking for
- 1-2 yrs experience working as a Senior/Lead Quality Engineer
- 3+ yrs experience in automation testing of web applications (UI & API) using Selenium, Robot Framework, Rest Assured etc
- 5+ yrs experience in Software Quality, Testing & Automation
- Strong Knowledge of Jenkins, Git (Continuous Integration and Configuration Management)
- Familiarity with modern AI-based low-code/no-code tools like Testsigma, Tosca, AccelQ, Katalon, etc.
- Experience in driving quality strategies & processes and guiding team to write clear and comprehensive test plans and test cases
- Attitude of breaking the system to make the system robust for users
- Detail oriented. Ability to empathize with customers
- The ability to work effectively in a fast-paced environment
- Team player with strong interpersonal skills, willing to ask for help and offer support to the rest of the team
- Good written and verbal communication skills
- BE/MS/MCA from reputed institutes in India or abroad
What you can look forward to
- You will get to see your work directly impacting users in a big way
- Freedom to contribute to multiple engineering disciplines (Development, Automation and DevOps)
- You will have the opportunity to work on the latest technologies as we are constantly innovating to provide reliable and scalable solutions for our customers.
- We value openness in the company and love delighting our customers.
Grey Chain seeks a highly skilled Generative AI Engineer with expertise in Python and GenAI (OpenAI, LangChain, vector databases, Retrieval-Augmented Generation (RAG), chatbot development, or agents). The ideal candidate will work in our innovation labs to build cutting-edge new features in the GenAI space.
Key Responsibilities:
- Develop and implement advanced features using LLM
- Improve the accuracy of LLM-based platforms by fine-tuning the models for specific use cases.
Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field.
- Over 5-7 years of experience in data science, deep learning, and machine learning.
- Hands-on experience with GenAI skills.
- Proficiency in Python, LangChain or PyTorch, and related technologies.
- Experience with LLMs.
- Familiarity with vector databases, cosine similarity algorithms, and MLOps.
- Expertise in agile methodologies and data science project lifecycles.
- Tech Stack: Programming Languages: Python
- Frameworks & Libraries: LangChain, PyTorch, scikit-learn, pandas, NumPy, Hugging Face.
- Databases: Vector databases, PostgreSQL
- APIs: RESTful APIs, FastAPI
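The listing above mentions RAG, vector databases, and cosine similarity together; a minimal pure-Python sketch of how they relate. This assumes embeddings are already computed; a real pipeline would use an embedding model and one of the vector databases named above, and the toy 2-D vectors are invented for illustration.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: rank stored
# document embeddings by cosine similarity to the query embedding.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, doc_vecs, top_k=2):
    """Return the indices of the top_k most similar documents."""
    scored = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return scored[:top_k]

# Toy 2-D "embeddings"; real ones have hundreds of dimensions.
docs = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
print(retrieve([1.0, 0.1], docs, top_k=2))  # → [0, 1]
```

The retrieved documents would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.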
Well-established Pune-based NonStop io Technologies seeks extremely talented Python developers to help innovate in the software development space. You must have 2+ years of experience building and maintaining Python applications with a focus on web development. You should have a strong understanding of web frameworks like Django or Flask, and of front-end technologies. You must be able to write clean, scalable, and efficient code, and you should be able to collaborate effectively with team members.
You should have a Bachelor's degree in Computer Science or equivalent experience. Excellent communication skills are essential. Familiarity with cloud services, version control systems, and Agile practices would be beneficial but is not necessary.
Expect talented, motivated, intense, and interesting co-workers. You must be willing to work from Kharadi, Pune.
Your compensation will include competitive salary and benefits.
We are an equal opportunity employer.
Founded by IIT Delhi Alumni, Convin is a conversation intelligence platform that helps organisations improve sales/collections and elevate customer experience while automating the quality & coaching for reps, and backing it up with super deep business insights for leaders.
At Convin, we are leveraging AI/ML to achieve these larger business goals while focusing on bringing efficiency and reducing cost. We are already helping the leaders across Health-tech, Ed-tech, Fintech, E-commerce, and consumer services like Treebo, SOTC, Thomas Cook, Aakash, MediBuddy, PlanetSpark.
If you love AI, understand SaaS, love selling, and are looking to join a ship bound to fly, then Convin is the place for you!
Responsibilities
- Designing and developing robust and scalable server-side applications using Python, Flask, Django, or other relevant frameworks and technologies.
- Collaborating with other developers, data scientists, and data engineers to design and implement RESTful APIs, web services, and microservices architectures.
- Writing clean, maintainable, and efficient code, and reviewing the code of other team members to ensure consistency and adherence to best practices.
- Participating in code reviews, testing, debugging, and troubleshooting to ensure the quality and reliability of applications.
- Optimising applications for performance, scalability, and security, and monitoring the production environment to ensure uptime and availability.
- Staying up-to-date with emerging trends and technologies in web development, and evaluating and recommending new tools and frameworks as needed.
- Mentoring and coaching junior developers to ensure they grow and develop their skills and knowledge in line with the needs of the team and the organisation.
- Communicating and collaborating effectively with other stakeholders, including product owners, project managers, and other development teams, to ensure projects are delivered on time and to specification.
You are a perfect match, if you have these qualifications -
- Strong experience in GoLang or Python (server-side development frameworks such as Flask or Django)
- Experience in building RESTful APIs, web services, and microservices architectures.
- Experience in using database technologies such as MySQL, PostgreSQL, or MongoDB.
- Familiarity with cloud-based platforms such as AWS, Azure, or Google Cloud Platform.
- Knowledge of software development best practices such as Agile methodologies, Test-Driven Development (TDD), and Continuous Integration/Continuous Deployment (CI/CD).
- Excellent problem-solving and debugging skills, and the ability to work independently as well as part of a team.
- Strong communication and collaboration skills, and the ability to work effectively with other stakeholders in a fast-paced environment.
at REConnect Energy
Work at the Intersection of Energy, Weather & Climate Sciences and Artificial Intelligence
About the company:
REConnect Energy is India's largest tech-enabled service provider in predictive analytics and demand-supply aggregation for the energy sector. We focus on digital intelligence for climate resilience, offering solutions for efficient asset and grid management, minimizing climate-induced risks, and providing real-time visibility of assets and resources.
Responsibilities:
- Design, develop, and maintain data engineering pipelines using Python.
- Implement and optimize database solutions with SQL and NoSQL databases (MySQL and MongoDB).
- Perform data analysis, profiling, and quality assurance to ensure high service quality standards.
- Troubleshoot and resolve data-pipeline related issues, ensuring optimal performance and reliability.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
- Participate in code reviews and contribute to the continuous improvement of the codebase.
- Utilize GitHub for version control and collaboration.
- Implement and manage containerization solutions using Docker.
- Implement tech solutions to new product development, ensuring scalability, performance, and security.
Requirements:
- Bachelor's or Master's degree in Computer Science, Software Engineering, Electrical Engineering, or equivalent.
- Proficient Python programming skills and expertise in data engineering.
- Experience in databases including MySQL and NoSQL.
- Experience in developing and maintaining critical and high availability systems will be given strong preference.
- Experience working with AWS cloud platform.
- Strong analytical and data-driven approach to problem solving.
Client based in Bangalore.
Data Science:
• Expert-level Python, analytical skills, experience working with different models, strong grasp of the basic concepts, CPG (domain).
• Statistical models & hypothesis testing
• Machine learning (important)
• Business understanding, visualization in Python.
• Classification, clustering, and regression
Mandatory Skills
• Data Science, Python, Machine Learning, Statistical Models, Classification, clustering and regression
About Zeni
Zeni is a new-age, full-service finance firm, built from the ground up using AI & ML, for startups and small businesses. Zeni's AI and finance experts collaborate to deliver 100% accurate accounting.
Certified accountants and Zeni's AI deliver 100% accurate books that you and your investors can trust. Zeni offers a Finance Concierge available to you 24x7. Zeni pays any bill quickly and easily with bank transfers, debit cards, or credit cards, even if your vendors only accept checks.
About the role:
We are looking for people that take quality as a point of pride. You will be a key member of the engineering staff working on our innovative FinTech product that simplifies the domain of finance management.
Responsibilities:
- You must be, or want to be, a jack of all trades
- Design and build fault-tolerant, high-performance, scalable systems
- Design and maintain the core software components that support Zeni platform
- Improve the scalability, resilience, observability, and efficiency of our core systems
- Code using primarily Python.
- Work closely with, and incorporate feedback from, product management, platform architects and senior engineers.
- Fail fast, fix fast. Rapidly fix bugs and solve the problems
- Proactively look for ways to make Zeni platform better
- Speed, Speed, Speed - must be a performance freak!
Requirements:
- B.E./B.Tech in Computer Science.
- 4 to 8 years of commercial software development experience
- You have built some impressive, non-trivial web applications by hand
- Excellent programming skills in Python (Object Oriented is a BIG plus)
- Google App engine experience a huge plus
- Disciplined approach to testing and quality assurance
- Good understanding of web technologies (HTTP, Apache) and familiarity with Unix/Linux
- Good understanding of data structures, algorithms and design patterns
- Great written communication and documentation abilities
- Comfortable in a small, intense and high-growth start-up environment
- You know and can admit when something is not great.
- You can recognise that something you've done needs improvement
- Past participation in hackathons is a big plus
- Startup experience or product company experience is a must.
- Experience integrating with 3rd party APIs
- Experience with Agile product development methodology
- Good at maintaining servers and troubleshooting
- Understanding of database query processing and indexing are preferred
- Experience with OAuth
- Experience with Google Cloud and/or Google App Engine platforms
- Experience writing unit tests
- Experience with distributed version control systems (eg: Git)
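The requirements above ask for an understanding of database query processing and indexing; a small, illustrative `sqlite3` sketch (not Zeni's stack) shows the effect directly: `EXPLAIN QUERY PLAN` reports a full table scan before an index exists and an index search afterwards. Table and index names are invented for the example.

```python
# How an index changes query processing, demonstrated with stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, vendor TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices (vendor, amount) VALUES (?, ?)",
                 [("acme", 10.0), ("globex", 25.5), ("acme", 7.25)])

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows carry a human-readable detail in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM invoices WHERE vendor = 'acme'"
print(plan(query))   # reports a SCAN (full table scan)
conn.execute("CREATE INDEX idx_vendor ON invoices(vendor)")
print(plan(query))   # now reports a SEARCH using idx_vendor
```

The same principle, with far larger planners, applies to the Postgres-style databases the listing mentions, via `EXPLAIN`/`EXPLAIN ANALYZE`.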
Looking for a Python developer with React experience.
Python frameworks like Django or Flask.
Develop RESTful APIs or GraphQL endpoints.
Company: CorpCare
Title: Head of Engineering/ Head of Product
Location: Mumbai (work from office)
CTC: Annual CTC Up to 25 Lacs
About Us:
CorpCare is India’s first all-in-one corporate funds and assets management platform. We offer a single-window solution for corporates, family offices, and HNIs. We assist corporates in formulating and managing treasury management policies and conducting reviews with investment committees and the board.
Job Summary:
The Head of Engineering will be responsible for overseeing the development, implementation, and management of our corporate funds and assets management platform. This role demands a deep understanding of the broking industry/Financial services industry, software engineering, and product management. The ideal candidate will have a robust background in engineering leadership, a proven track record of delivering scalable technology solutions, and strong product knowledge.
Key Responsibilities:
- Develop and communicate a clear engineering vision and strategy aligned with our broking and funds management platform.
- Conduct market research and technical analysis to identify trends, opportunities, and customer needs within the broking industry.
- Define and prioritize the engineering roadmap, ensuring alignment with business goals and customer requirements.
- Lead cross-functional engineering teams (software development, QA, DevOps, etc.) to deliver high-quality products on time and within budget.
- Oversee the entire software development lifecycle, from planning and architecture to development and deployment, ensuring robust and scalable solutions.
- Write detailed technical specifications and guide the engineering teams to ensure clarity and successful execution.
- Leverage your understanding of the broking industry to guide product development and engineering efforts.
- Collaborate with product managers to incorporate industry-specific requirements and ensure the platform meets the needs of brokers, traders, and financial institutions.
- Stay updated with regulatory changes, market trends, and technological advancements within the broking sector.
- Mentor and lead a high-performing engineering team, fostering a culture of innovation, collaboration, and continuous improvement.
- Recruit, train, and retain top engineering talent to build a world-class development team.
- Conduct regular performance reviews and provide constructive feedback to team members.
- Define and track key performance indicators (KPIs) for engineering projects to ensure successful delivery and performance.
- Analyze system performance, user data, and platform metrics to identify areas for improvement and optimization.
- Prepare and present engineering performance reports to senior management and stakeholders.
- Work closely with product managers, sales, marketing, and customer support teams to align engineering efforts with overall business objectives.
- Provide technical guidance and support to sales teams to help them understand the platform's capabilities and competitive advantages.
- Engage with customers, partners, and stakeholders to gather feedback, understand their needs, and validate engineering solutions.
Requirements:
- BE/B.Tech in Computer Science
- MBA a plus, not required
- 5+ years of experience in software engineering, with at least 2+ years in a leadership role.
- Strong understanding of the broking industry and financial services industry.
- Proven track record of successfully managing and delivering complex software products.
- Excellent communication, presentation, and interpersonal skills.
- Strong analytical and problem-solving abilities.
- Experience with Agile/Scrum methodologies.
- Deep understanding of software architecture, cloud computing, and modern development practices.
Technical Expertise:
- Front-End: React, Next.js, JavaScript, HTML5, CSS3
- Back-End: Node.js, Express.js, RESTful APIs
- Database: MySQL, PostgreSQL, MongoDB
- DevOps: Docker, Kubernetes, AWS (EC2, S3, RDS), CI/CD pipelines
- Version Control: Git, GitHub/GitLab
- Other: TypeScript, Webpack, Babel, ESLint, Redux
Preferred Qualifications:
- Experience in the broking or financial services industry.
- Familiarity with data analytics tools and methodologies.
- Knowledge of user experience (UX) design principles.
- Experience with trading platforms or financial technology products.
This role is ideal for someone who combines strong technical expertise with a deep understanding of the broking industry and a passion for delivering high-impact software solutions.
Join an Innovative Cybersecurity Startup
At our groundbreaking cybersecurity startup, we focus on helping customers identify, mitigate, and protect against ever-evolving cyber threats. In today's geopolitical climate, organizations must stay ahead of both malicious threat actors and nation-state actors. Our mission is to support cybersecurity teams overwhelmed by the growing number of threats by providing intelligent systems that prioritize the most significant risks.
We assist organizations in safeguarding their assets and customer data by continuously evaluating new threats and risks to their cloud environments. Our solutions enable rapid mitigation of high-priority threats, allowing engineers to spend more time innovating and delivering value to their customers.
About the Engineering Team
Our engineering team brings decades of experience from the security industry, having worked on cutting-edge technologies that have protected millions of customers. We have built technologies from the ground up, partnered with industry leaders on innovations, and addressed stringent customer requirements. Our expertise spans across software engineering, including data analytics, AI/ML processing, highly distributed and available services with real-time monitoring, and protocol-level work. You'll have the opportunity to learn from top engineering talent with multi-cloud expertise.
Key Responsibilities
- Develop and maintain web applications using front-end technologies like HTML, CSS, TypeScript, and React.
- Build and manage back-end services and analytics computation in Python or Golang, utilizing serverless technologies (e.g., Kubernetes, ECS Fargate).
- Design and implement database schemas, manage data storage solutions, and write efficient database queries.
- Use Terraform/CloudFormation to simulate customer environments or update our infrastructure following security best practices.
- Collaborate with cross-functional teams, including designers, product managers, and other developers, to deliver high-quality products.
- Write clean, maintainable, and efficient code adhering to best practices and coding standards.
- Implement automated testing and continuous integration processes to ensure software quality.
- Proactively add telemetry and logging to troubleshoot, debug, and resolve software issues.
- Stay current with emerging web technologies and industry trends.
Requirements
- Experience: 6+ years of professional experience in full-stack development.
- Front-End: Proficiency in HTML, CSS, TypeScript/JavaScript, and the React framework. Proven experience with graphical libraries, particularly for displaying large graph data (e.g., Graphviz).
- Back-End: Experience with server-side programming languages like Golang and Python.
- Cloud Technologies: Proficiency in AWS cloud technologies and experience in building dynamic, responsive UIs using GraphQL, Elasticache, AppSync, CloudFront.
- Databases: Knowledge of relational databases (e.g., MySQL, PostgreSQL) and/or NoSQL databases (e.g., MongoDB). Proficiency in data modeling, caching, and data lifecycle management.
- Version Control: Familiarity with version control systems, particularly Git.
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- Skills: Strong problem-solving abilities, good communication skills, and the ability to work effectively in a collaborative team environment.
Preferred Qualifications
- Experience with other cloud services and technologies.
- Familiarity with DevOps practices and CI/CD pipelines.
- Knowledge of additional graphical libraries and tools.
- Experience with Agile/Scrum methodologies.
About Shopalyst:
Shopalyst offers a Discovery Commerce platform for digital marketers. Combining data, AI, and deep integrations with digital media and e-commerce platforms, Shopalyst connects people with products they love. More than 500 marquee brands leverage our SaaS platform for data-driven marketing and sales in 30 countries across Asia, Europe, and the Americas. We have offices in Fremont CA, Bangalore, and Trivandrum. Our company is backed by Kalaari Capital.
About the Role: Software Engineer (Python)
We are currently looking for people to join our Engineering team where internet scale, reliability, security, high performance and self-management drives almost every design decision that we take. This role will be based out of Thiruvananthapuram, Kerala.
We are looking for Software Engineers to help us build functional products and applications. Responsibilities include participating in software design; writing clean, efficient code that adheres to coding standards, guidelines, and best practices; running tests to improve system functionality, performance, and security; and documenting design and code.
Responsibilities
- Collaborate with cross-functional teams to design and develop robust and scalable server-side applications using Node.js.
- Utilize Python to write batch jobs and scripts for various automation tasks, ensuring efficiency and reliability.
- Write clean, efficient, and well-documented code to deliver high-quality software solutions.
- Troubleshoot and resolve bugs and other performance issues to maintain system stability.
- Participate in code reviews to ensure code quality and compliance with coding standards.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Mentor junior developers and share knowledge to foster a collaborative learning environment.
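As a flavor of the batch jobs and automation scripts this role mentions, here is a minimal sketch of a cleanup task in Python. The function name, directory layout, and retention policy are hypothetical illustrations, not part of any actual codebase.

```python
import shutil
import time
from pathlib import Path

def archive_old_files(src: Path, dst: Path, max_age_days: int) -> list[str]:
    """Move files older than max_age_days from src into dst; return moved file names."""
    dst.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86_400  # seconds per day
    moved = []
    for f in sorted(src.iterdir()):
        if f.is_file() and f.stat().st_mtime < cutoff:
            shutil.move(str(f), str(dst / f.name))
            moved.append(f.name)
    return moved
```

A job like this would typically be wired into cron or a task scheduler rather than run by hand, with logging and alerting added for reliability.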
Must-have requirements
- Strong understanding of Node.js and its core concepts, such as event-driven architecture, asynchronous programming, and callbacks.
- Proficiency in Python, with a strong understanding of its ecosystem, libraries, and frameworks.
- 1-5 years of proven experience working as a Node.js/Python Developer or in a similar role.
- Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Excellent problem-solving and analytical skills, with the ability to troubleshoot complex technical issues.
- Strong working knowledge of NoSQL (preferred) or relational databases
- Good exposure to working on Linux/Unix systems.
- Proficient understanding of code versioning tools (such as Git)
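The event-driven and asynchronous concepts listed above have a direct counterpart in Python's asyncio. A minimal illustrative sketch; the `fetch` coroutine simulates I/O with a sleep and is purely hypothetical:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulated I/O-bound call: awaiting yields control to the event loop
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list[str]:
    # Both "requests" run concurrently on a single thread
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

results = asyncio.run(main())
print(results)  # ['a:done', 'b:done'] - gather preserves argument order
```

The same non-blocking model underlies Node.js callbacks and promises; asyncio expresses it with coroutines and an explicit event loop.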
Nice-to-have requirements
- Proficiency in TypeScript to write scalable and maintainable code.
- Familiarity with cloud platforms like AWS.
- Basic understanding of GoLang/Java as a coding language
Alternative Path is looking for an application developer to assist one of its clients, a SaaS platform that helps alternative investment firms streamline their document collection and data extraction processes using machine learning. You will work with individuals in various departments of the company to define and craft new products and features for our platform, and to improve existing ones. You will have a large degree of independence and trust, but you won't be isolated: the support of the Engineering team leads, the Product team leads, and every other technology team member is behind you.
You will bring your projects from initial conception through all cycles of development: project definition, development, debugging, initial release, and subsequent iteration. You will also take part in shaping the architecture of the product, including our deployment infrastructure, to fit the growing needs of the platform.
Key Responsibilities
- This is a backend-heavy Fullstack role.
- Develop front-end and back-end product features for an optimal user experience
- Design intuitive user interactions on web pages
- Spin up servers and databases while ensuring stability and scalability of applications
- Work alongside graphic designers to enhance web design features
- Oversee and drive projects from conception to finished product
- Design and develop APIs
- Brainstorm, execute and deliver solutions that meet both technical and consumer needs
- Stay abreast of developments in web applications and programming languages
Desired Skills
- 2-4 years of web application development experience
- Python development and architecture
- Prior work experience of working on Django or Flask framework
- Knowledge of HTML, CSS, JavaScript and React.
- Familiar with agile development environment, continuous integration and continuous deployment
- Familiar with OOP, MVC, and commonly used design patterns
- Knowledge of SQL and relational databases
- Experience working with one or more AWS services such as EC2, S3, Managed Redis, Elasticsearch, Managed Airflow, and RDS is preferred but not mandatory
- Comfortable with continuous integration, automated testing, source control, and other DevOps methodologies
About us
Fisdom is one of the largest wealthtech platforms that allows investors to manage their wealth in an intuitive and seamless manner. Fisdom has a suite of products and services that takes care of every wealth requirement an individual would have, including Mutual Funds, Stock Broking, Private Wealth, Tax Filing, and Pension Funds.
Fisdom has a B2C app and also an award-winning B2B2C distribution model where we have partnered with 15 of the largest banks in India, such as Indian Bank and UCO Bank, to provide wealth products to their customers. In our bank-led distribution model, our SDKs are integrated seamlessly into each bank's mobile banking and internet banking applications. Fisdom is the first wealthtech company in the country to launch a stock broking product for customers of a PSU bank.
The company is breaking down barriers by enabling access to wealth management for underserved customers. Our partners have a combined user base of more than 50 crore customers. This makes us uniquely placed to disrupt the wealthtech space, which we believe is in its infancy in India in terms of wider adoption.
Where are we now and where are we heading towards
Founded by veteran VC-turned-entrepreneur Subramanya SV (Subu) and former investment banker Anand Dalmia, Fisdom is backed by PayU (Naspers), Quona Capital, and Saama Capital, with $37 million of total funds raised so far. Fisdom is known for its revenue- and profitability-focused approach towards sustainable business.
Fisdom is the No.1 company in India in the B2B2C wealthtech space and one of the most admired companies in the fintech ecosystem for our business model. We look forward to growing our leadership position by staying focused on product and technology innovation.
Our technology team
Today we are a 60-member strong technology team. Everyone in the team is a hands-on engineer, including the team leads and managers. We take pride in being product engineers and we believe engineers are fundamentally problem solvers first. Our culture binds us together as one cohesive unit. We stress engineering excellence and strive to become a high-talent-density team. Some values that we preach and practice include:
- Individual ownership and collective responsibility
- Focus on continuous learning and constant improvement in every aspect of engineering and product
- Cheer for openness, inclusivity and transparency
- Merit-based growth
What we are looking for
- Openness to working in a flat, non-hierarchical setup where the daily focus is shipping features, not reporting to managers
- Experience designing highly interactive web applications with performance, scalability, accessibility, usability, design, and security in mind.
- Experience with distributed (multi-tiered) systems, algorithms, and relational and NoSQL databases.
- Ability to break down larger/fuzzier problems into smaller ones within the scope of the product
- Experience with architectural trade-offs, applying synchronous and asynchronous design patterns, and delivering with speed while maintaining quality.
- Raising the bar on sustainable engineering by improving best practices and producing best-in-class code, documentation, testing and monitoring.
- Contributing code and actively taking part in code reviews.
- Working with Product Owners/managers to clearly define the scope of multiple sprints. Leading and guiding the team through sprint scoping, resource allocation and commitment: the execution plan.
- Driving feature development end-to-end, as an active partner with product, design, and peer engineering leads and managers.
- Familiarity with build, release, and deployment tools such as Ant, Maven, Gradle, Docker, Kubernetes, Jenkins etc.
- Effectiveness at influencing a culture of engineering craftsmanship and excellence
- Helping the team make the right choices. Driving adoption of engineering best practices and development processes within the team.
- An understanding of security and compliance.
- User authentication and authorisation between multiple systems, servers, and environments.
- Based on your experience, you may lead a small team of Engineers.
If you don't have all of these, that's ok. But be excited about learning the few you don't know.
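One common pattern behind authentication and authorisation between multiple systems is a stateless signed token. Below is a minimal sketch using only the Python standard library; the `SECRET` value and payload shape are hypothetical, and a production system would also need expiry, key rotation, and transport security.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # hypothetical key, shared out-of-band between services

def sign_token(payload: dict) -> str:
    """Serialize a payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```

The constant-time `hmac.compare_digest` check matters here: comparing signatures with `==` can leak timing information to an attacker.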
Skills
Microservices, Engineering Management, Quality Management, Technical Architecture, Technical Lead. Hands-on programming experience in one of these languages: Python, Golang.
Additional perks
- Access to large repositories of online courses through Myacademy (includes Udemy, Coursera, Harvard ManageMentor, Udacity and many more). We strongly encourage learning something outside of work as a habit.
- Career planning support, counseling and coaching, with both internal and external coaches.
- Relocation policy
You will not be a good fit for this role if
- you have experience of working only with services companies, or have spent a major part of your career there
- you are not open to shifting to a new programming language or stack, and are only exploring a position aligned to your current technical experience
- you are not very hands-on, seek direction constantly, and need continuous supervision from a manager to finish tasks
- you like working alone, and mentoring junior engineers does not interest you
- you are looking to work in very large teams
Why join us and where?
We're a small but high-performing engineering team. We recognize that the work we do impacts the lives of hundreds of thousands of people. Your work will contribute significantly to our mission. We pay competitive compensation and performance bonuses. We provide a high-energy work environment, and you are encouraged to play around with new technology and self-learn. You will be based out of Bangalore.
Key Responsibilities
- Write code, build prototypes and resolve issues.
- Write and review unit test cases.
- Review code and designs, both your own and your team members'.
- Define and build microservices.
- Build systems with positive business outcomes.
- Track module health, usage and behaviour.
Key Skills
- An engineer with 1-3 years of working experience in web services, preferably in Python
- Must have a penchant for good API design.
- Must be a stickler for good, clear and secure coding.
- Must have built and released APIs in production.
- Experience in working with RDBMS & NoSQL databases.
- Working knowledge of GCP, AWS, Azure or any other cloud provider.
- Aggressive problem diagnosis and creative problem-solving skills.
- Communication skills, to work with developers across the world.
About POSHN
POSHN is a dynamic India-based ag-fintech venture founded in 2020 and supported by leading marquee financial institutions. We are on a mission to organize and digitally transform the global agri-supply chain market. By applying first principles thinking, we are re-imagining solutions with a tech-product-first mindset. The agri-supply chain space is gigantic, complex, and has been largely unorganized. At POSHN, we are taking the hassle out of it by creating a platform that empowers our products and facilitates a better, efficient, and seamless experience between stakeholders.
About the Team
Our core team consists of alumni from BITS Pilani, IIM, and XLRI, each with a decade-long experience in supply chain, technology, and product development. Previously, we have built several highly impactful tech-product startups from the ground up.
About the Role: Full Stack Engineer
We are looking for a talented and motivated Full Stack Developer with 2-4 years of experience to join our team. In this role, you will work closely with our team to define the product roadmap and take ownership of the web development stack. You will have the flexibility to innovate and implement new solutions that enhance our platform, working alongside UX, data engineering, and other key stakeholders.
Key Responsibilities
• Full Stack Development: Design, build, and configure applications to meet business processes and application requirements. You will work across the frontend and backend, ensuring seamless integration and functionality.
• Frontend Engineering: Convert designs into smooth, user-friendly interfaces. Analyze UI/UX designs and suggest the best approaches for implementation.
• Backend Engineering: Build scalable backend systems using microservices architecture. Optimize database models, queries, and ensure high availability and performance.
• Requirements Gathering: Capture and clearly articulate technical requirements by working closely with stakeholders and end customers. Prioritize tasks to meet business objectives.
• Code Quality: Write testable, scalable, and efficient code. Lead code reviews and ensure that software quality standards are met.
• Solution Implementation: Take ownership of assigned products, ensuring timely and efficient delivery. Carry out solution testing, including root-cause analysis, and suggest logical alternatives when needed.
Career Experience
• Experience: 2-4 years of experience in full stack development, including significant contributions to at least three web development projects.
• Technical Expertise: Proficient in frontend frameworks (VueJS, ReactJS, or AngularJS) and backend technologies (NodeJS, Go, or Python).
• Microservices Architecture: Experience building scalable backend microservices architecture.
• Database Management: Expertise in SQL, NoSQL, and database optimization.
• Agile Methodologies: Familiar with agile development practices, including coding standards, documentation, unit testing, and version control using Git.
Bonus Points/Nice to Have
• E-commerce/Supply Chain Experience: Any experience or knowledge in the e-commerce or supply chain industry.
About You
• Creative and Innovative: You enjoy designing and building complex software systems and are not afraid to experiment and try new approaches.
• Ownership: You take responsibility for your work and are committed to delivering high-quality solutions.
• Continuous Learner: You are curious and eager to stay updated with the latest technologies and industry trends.
• Analytical Mind: You have a keen eye for detail and are capable of turning ideas into working applications that deliver real value to customers.
Benefits
• Competitive Salary and ESOPs: Rewarding you for your best work.
• Generous Leave: We believe in work-life balance.
• Exciting Startup Environment: Be a key member of a startup transforming the agri-supply chain industry.
• Collaborative Culture: Work in an open, fun, and supportive environment.
• Training & Development: Grow in the areas that interest you most.
How to Apply
If you are interested in this role, kindly apply.
About the Role
We are actively seeking talented Senior Python Developers to join our ambitious team dedicated to pushing the frontiers of AI technology. This opportunity is tailored for professionals who thrive on developing innovative solutions and who aspire to be at the forefront of AI advancements. You will work with different companies in the US who are looking to develop both commercial and research AI solutions.
Required Skills:
- Write effective Python code to tackle complex issues
- Use business sense and analytical abilities to glean valuable insights from public databases
- Clearly express the reasoning and logic when writing code in Jupyter notebooks or other suitable mediums
- Extensive experience working with Python
- Proficiency with the language's syntax and conventions
- Previous experience tackling algorithmic problems
- Nice to have some prior Software Quality Assurance and Test Planning experience
- Excellent spoken and written English communication skills
The ideal candidate should be able to:
- Clearly explain their strategies for problem-solving.
- Design practical solutions in code.
- Develop test cases to validate their solutions.
- Debug and refine their solutions for improvement.
- 5 years in software development (minimum 3 years)
- Strong expertise in Ruby on Rails (3-5 years)
- Knowledge of Python is a plus
Key Skills:
- Proficiency in scalable app techniques: caching, APM, microservices architecture
- Ability to write high-quality code independently
- Experience mentoring junior engineers (0-2 years of experience)
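Of the scalable-app techniques named above, caching is the easiest to show in miniature. Here is a small Python illustration using `functools.lru_cache`; `expensive_lookup` is a hypothetical stand-in for a slow database or API call:

```python
from functools import lru_cache

CALLS = {"count": 0}  # tracks how often the underlying work actually runs

@lru_cache(maxsize=256)
def expensive_lookup(key: str) -> str:
    # Stand-in for a slow database or API call
    CALLS["count"] += 1
    return key.upper()

expensive_lookup("rails")   # miss: computed
expensive_lookup("rails")   # hit: served from the in-process cache
print(CALLS["count"])                      # 1
print(expensive_lookup.cache_info().hits)  # 1
```

Rails offers analogous tooling (`Rails.cache.fetch`, fragment caching); the underlying idea of keying repeated reads on their inputs is the same across stacks.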
What We Offer:
- An opportunity to work with a dynamic team
- A challenging environment where your skills will be put to the test
- A chance to make a real impact by guiding and mentoring others
Ready to make your mark? If you're based out of or willing to relocate to Gurgaon and have the experience we're looking for, apply now!
About Jeeva.ai
At Jeeva.ai, we're on a mission to revolutionize the future of work by building AI employees that automate all manual tasks—starting with AI Sales Reps. Our vision is simple: "Anything that doesn’t require deep human connection can be automated & done better, faster & cheaper with AI." We’ve created a fully automated SDR using AI that generates 3x more pipeline than traditional sales teams at a fraction of the cost.
As a dynamic startup, we are backed by Alt Capital (founded by Jack Altman & Sam Altman), Marc Benioff (CEO, Salesforce), Gokul (Board, Coinbase), Bonfire (investors in ChowNow), Techtsars (investors in Uber), Sapphire (investors in LinkedIn), and Microsoft. With $1M ARR in just 3 months after launch, we're not just growing: we're thriving and making a significant impact in the world of artificial intelligence.
As we continue to scale, we're looking for mid-senior Full Stack Engineers who are passionate, ambitious, and eager to make an impact in the AI-driven future of work.
About You
- Experience: 3+ years of experience as a Full Stack Engineer with a strong background in React, Python, MongoDB, and AWS.
- Automated CI/CD: Experienced in implementing and managing automated CI/CD pipelines using GitHub Actions and AWS Cloudformation.
- System Architecture: Skilled in architecting scalable solutions for systems at scale, leveraging caching strategies, messaging queues and async/await paradigms for highly performant systems.
- Cloud-Native Expertise: Proficient in deploying cloud-native apps using AWS (Lambda, API Gateway, S3, ECS), with a focus on serverless architectures to reduce overhead and boost agility.
- Development Tooling: Proficient in a wide range of development tools such as FastAPI, React State Management, REST APIs, Websockets and robust version control using Git.
- AI and GPTs: Competent in applying AI technologies, particularly in using GPT models for natural language processing, automation and creating intelligent systems.
- Impact-Driven: You've built and shipped products that users love and have seen the impact of your work at scale.
- Ownership: You take pride in owning projects from start to finish and are comfortable wearing multiple hats to get the job done.
- Curious Learner: You stay ahead of the curve, eager to explore and implement the latest technologies, particularly in AI.
- Collaborative Spirit: You thrive in a team environment and can work effectively with both technical and non-technical stakeholders.
- Ambitious: You have a hunger for success and are eager to contribute to a fast-growing company with big goals.
What You’ll Be Doing
- Build and Innovate: Develop and scale AI-driven products like Gigi (AI Outbound SDR) and Jim (AI Inbound SDR), and automate across voice & video with AI.
- Collaborate Across Teams: Work closely with our Product, GTM, and Engineering teams to deliver world-class AI solutions that drive massive value for our customers.
- Integrate and Optimize: Create seamless integrations with popular platforms like Salesforce, LinkedIn, and HubSpot, enhancing our AI’s capabilities.
- Problem Solving: Tackle challenging problems head-on, from data pipelines to user experience, ensuring that every solution is both functional and delightful.
- Drive AI Adoption: Be a key player in transforming how businesses operate by automating workflows, lead generation, and more with AI.
About us
CallHub provides cloud-based communication software for nonprofits, political parties, advocacy organizations and businesses. We have delivered over 200 million messages and calls for thousands of customers. We help political candidates during their campaigns in getting their message across to their voters, conducting surveys, managing event/town-hall invites, and recruiting volunteers for election campaigns. We are profitable, with 8000+ paying customers from North America, Australia and Europe. Our customers include Uber, the Democratic Party, and major political parties in the US, Canada, UK, France and Australia.
About the Role
As a DevOps Engineer, you will play a crucial role in architecting, implementing, and managing the infrastructure and automation that powers our scalable and reliable cloud-based applications. Your expertise in Python, AWS, and various cloud services will be pivotal in driving our CI/CD pipelines and ensuring optimal performance, security, and availability of our services. Within the DevOps team, you will work closely with highly technical software engineers, quality engineers, support engineers, and product managers to build and maintain robust infrastructure solutions that support our rapidly growing customer base. We're looking for engineers with strong computer science fundamentals who are passionate, inquisitive and eager to learn new technologies, and who love working in a dynamic and fast-paced environment, contributing to our mission of delivering exceptional product experiences.
Responsibilities
- Architect, implement, and manage scalable and secure infrastructure on AWS using services such as RDS, EC2, ELB, ASG, CloudWatch, and Lambda.
- Automate deployment pipelines (CI/CD) to ensure seamless and reliable delivery of software to production.
- Implement and maintain monitoring and alerting systems using CloudWatch and other tools to ensure system reliability and performance.
- Optimize the performance and security of our applications using Cloudflare, Nginx, Redis, and CDNs.
- Manage and optimize databases, including PostgreSQL, with a focus on indexing, query tuning, and performance optimization.
- Act as a Site Reliability Engineer (SRE), being part of the on-call rotation to respond to and resolve critical incidents, ensuring high availability and minimal downtime.
- Develop and implement strategies for incident management, root cause analysis, and post-incident reviews to continuously improve system reliability.
- Collaborate with development teams to integrate DevOps best practices into the lifecycle of applications built with Python, Django, and Celery.
- Deploy and manage message brokers and streaming platforms like RabbitMQ and Kafka.
- Configure and manage proxy servers, reverse proxies, and load balancers to ensure optimal traffic management and security.
- Troubleshoot and resolve infrastructure-related issues promptly.
- Document processes, configurations, and best practices to ensure knowledge sharing and smooth operation.
- Contribute to the continuous improvement of our DevOps practices and toolsets.
- Communicate well with product and relevant stakeholders.
What we’re looking for
- 1-3 years of experience in a DevOps or similar role, with a strong focus on Python scripting, AWS services, and infrastructure management.
- Hands-on experience with AWS services such as EC2, RDS, ELB, ASG, CloudWatch, and Lambda.
- Strong knowledge of CI/CD tools and practices, including automation using Jenkins, GitLab CI, or similar tools.
- Experience with infrastructure as code (IaC) tools like Terraform, Pulumi, or CloudFormation.
- Experience with Python-based frameworks like Django and task queues like Celery.
- Proficiency in managing web servers (Nginx), caching solutions (Redis, Memcache), and CDNs.
- Experience with relational databases (PostgreSQL) and messaging systems like RabbitMQ and Kafka.
- Knowledge of database indexing, query tuning, and performance optimization techniques.
- Solid understanding of proxy servers, reverse proxies, and load balancers.
- Ability to troubleshoot complex issues across multiple layers of the stack.
- Strong communication skills, with the ability to work effectively in a collaborative team environment.
- Passionate about learning new technologies and improving existing processes.
- BE/MS in Computer Science or a related field, or equivalent practical experience.
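Index-aware query tuning, mentioned above, can be demonstrated end-to-end with Python's built-in sqlite3. The schema and query here are hypothetical, and production work on this stack would use PostgreSQL's `EXPLAIN` instead, but the before/after shape of the plan is the same idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN returns rows describing how SQLite will run the query
    return " ".join(str(row) for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE email = 'a@example.com'"
before = plan(query)  # without an index: a full table scan
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan(query)   # with the index: a keyed search
print(before)
print(after)
```

On PostgreSQL, `EXPLAIN (ANALYZE, BUFFERS)` additionally reports actual timings and buffer reads, which is what query-tuning work usually iterates on.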
What you can look forward to
- The opportunity to work on cutting-edge cloud technologies and contribute to mission-critical infrastructure.
- A role that allows you to take ownership of significant aspects of our infrastructure and automation.
- A collaborative and open culture where your ideas are valued, and you are encouraged to take initiative and aspire to be great in your role.
- A dynamic work environment where your contributions directly and significantly impact the success and reliability of our services.
- Exposure to the full lifecycle of software development and deployment, from design to monitoring and optimization.
Job Overview:
We are seeking a motivated and enthusiastic Junior AI/ML Engineer to join our dynamic team. The ideal candidate will have a foundational knowledge in machine learning, deep learning, and related technologies, with hands-on experience in developing ML models from scratch. You will work closely with senior engineers and data scientists to design, implement, and optimize AI solutions that drive innovation and improve our products and services.
Key Responsibilities:
- Develop and implement machine learning and deep learning models from scratch for various applications.
- Collaborate with cross-functional teams to understand requirements and provide AI-driven solutions.
- Utilize deep learning frameworks such as TensorFlow, PyTorch, Keras, and JAX for model development and experimentation.
- Employ data manipulation and analysis tools such as pandas, scikit-learn, and statsmodels to preprocess and analyze data.
- Apply visualization tools like matplotlib (and libraries like spaCy for NLP work) to present data insights and model performance.
- Demonstrate a general understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts.
- Leverage knowledge of statistical and algorithmic models along with fundamental mathematical concepts, including linear algebra and probability.
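As a taste of what "models from scratch" means in practice, here is the smallest possible example: fitting a line by batch gradient descent with no ML libraries at all. The learning rate and epoch count are illustrative choices, not tuned values.

```python
def fit_linear(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Noiseless data generated from y = 2x + 1
w, b = fit_linear([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

The same loop structure, with richer models and automatic gradients, is what frameworks like TensorFlow and PyTorch industrialize.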
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
- Solid foundation in machine learning, deep learning, computer vision, and natural language processing (NLP).
- Proven experience in developing ML/deep learning models from scratch.
- Proficiency in Python and relevant libraries.
- Hands-on experience with deep learning frameworks such as TensorFlow, PyTorch, Keras, or JAX.
- Experience with data manipulation and analysis libraries like pandas, scikit-learn, and visualization tools like matplotlib.
- Strong understanding of data structures, algorithms, and multi-threaded programming.
- Knowledge of statistical models and fundamental mathematical concepts, including linear algebra and probability.
Skills and Competencies:
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team in a fast-paced environment.
- Eagerness to learn and stay updated with the latest advancements in AI/ML technologies.
Preferred Qualifications:
- Previous internship or project experience in AI/ML.
- Familiarity with cloud-based AI/ML services and tools.
Nirwana.AI is an equal opportunity employer and welcomes applicants from all backgrounds to apply.
Who We Are 🌟
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT’. Embracing Software Craftsmanship values and eXtreme Programming Practices, we create well-crafted products for our clients. We partner with large organizations to help modernize their legacy code bases, and work with startups to launch MVPs, help them scale, or serve as extensions of their team to efficiently operationalize their ideas. We love to work with folks who are passionate about creating exceptional software, are continuous learners, and are painstakingly fussy about quality. 🚀
Our Values 💡
- Relentless Pursuit of Quality with Pragmatism
- Extreme Ownership
- Proactive Collaboration
- Active Pursuit of Mastery
- Effective Feedback
- Client Success
What We’re Looking For 👀
We’re on the hunt for Software Craftspeople who take pride in their work and the code they write. If you believe in and evangelize eXtreme Programming principles, and if you are motivated and passionate about forming great teams, we want you! We strongly adhere to being a DevOps organization, where developers own the entire release cycle. This means you will get to work across programming languages, cloud infrastructure technologies, client communication and everything in between. Please read on if what you have read so far resonates with you! 🌥️
What You’ll Be Doing 💻
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement 📚
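The test-first practice listed above can be illustrated with a minimal sketch; the function and test names here are hypothetical, not part of the posting:

```python
# Test-first: the test is written before the implementation exists,
# and the implementation is then written to make it pass.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Python  Jobs ") == "python-jobs"

def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_slugify()  # passes once the implementation satisfies the test
```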
Skills You Need to Succeed 🌟
Most Important:
Integrity, a strong character, diligence, and commitment to excellence.
Must-Have:
- Technical Skills: Proficiency in Python, Test-Driven Development, and TypeScript.
- Self-Learner: Hands-on and obsessive about best practices in software development.
- Sense of Ownership: Commitment to implementing the highest quality solutions.
- Programming Expertise: Strong skills in object-oriented programming, data structures, algorithms, and software engineering methodologies.
- Web Architecture: Ability to design and develop web architecture and optimize existing infrastructure.
- Agile and XP Methodologies: Experience working in Agile and eXtreme Programming methodologies within a continuous deployment environment.
- Technologies: Interest in mastering technologies like web server ecosystems, relational DBMS, TDD, CI/CD tools.
- Server Configuration: Knowledge of server configuration and deployment infrastructure.
- Tools and Documentation: Experience using source control, bug tracking systems, writing user stories, and technical documentation. 📄
Employee Benefits 🎁
- Medical Insurance: Comprehensive coverage for you and your family 🏥
- Term Insurance: Financial security for your loved ones 💼
- Employee Friendly Leave Policy: Take the time you need to recharge and maintain work-life balance 🏖️
- Dedicated Learning and Development Budget: Grow your skills with a budget allocated specifically for learning 📚
- Flexible Working Hours: Customize your work schedule to fit your life ⏰
- Work from Home/Remote: Enjoy the flexibility of working from anywhere 🏡
- And Many More!: Additional perks designed to support your well-being and professional growth 🌟
SOFTWARE DEVELOPER
ABOUT US
Datacultr is a global Digital Operating System for Risk Management and Credit Recovery; we drive collection efficiencies and reduce delinquencies and Non-Performing Loans (NPLs). Datacultr is a digital-only provider of consumer engagement, recovery, and collection solutions, helping consumer lending, retail, telecom, and fintech organizations expand and grow their business in the under-penetrated New-to-Credit and Thin-File segments. Datacultr’s platforms make the underserved and unbanked segments viable for providers of consumer durable loans, Buy Now Pay Later, micro-loans, nano-loans, and other unsecured loans.
We are helping millions of new-to-credit consumers across emerging markets access formal credit and begin their journey towards financial health. We have clients across India, South Asia, South East Asia, Africa, and LATAM.
Datacultr is headquartered in Dubai, with offices in Abu Dhabi, Singapore, Ho Chi Minh City, Nairobi, and Mexico City; with our Development Center based out of Gurugram, India.
ORGANIZATION’S GROWTH PLAN
Datacultr’s vision is to enable convenient financing opportunities for consumers, entrepreneurs, and small merchants, helping them combat the Socio-economic problems this segment faces due to restricted access to financing.
We are on a mission to enable 30 million unbanked & under-served people, to access financial services by 2025.
JOB DESCRIPTION
POSITION – Software Developer – L1/L2
ROLE – Individual Contributor
FUNCTION – Engineering
WORK LOCATION – Gurugram
WORK MODEL – Work from the Office only
QUALIFICATION – B.Tech /M.Tech /B.C.A. /M.C.A.
SALARY PACKAGE – Negotiable based on skillset & experience
NOTICE PERIOD – Immediate joining preferred
EXPECTATION
We are seeking a highly skilled and experienced Software Engineer with a minimum of 2 years of professional experience in Python and Django, specifically in building REST APIs using frameworks like FastAPI and Django Rest Framework (DRF). The ideal candidate should have hands-on experience with Redis caching, Docker and containerization tooling, and PostgreSQL.
KEY RESPONSIBILITIES
1. Collaborate with cross-functional teams to design, develop, and maintain high-quality software solutions using Python, Django (including Django REST Framework), FastAPI, and other relevant frameworks.
2. Build robust and scalable REST APIs, ensuring efficient data transfer and seamless integration with frontend and third-party systems.
3. Utilize Redis for caching, session management, and performance optimization, and implement other caching strategies as needed.
4. Containerize applications using Docker for easy deployment and scalability.
5. Design and implement database schemas using PostgreSQL, ensuring data integrity and performance.
6. Write clean, efficient, and well-documented code following best practices and coding standards.
7. Participate in system design discussions and contribute to architectural decisions.
8. Troubleshoot and debug complex software issues, ensuring smooth operation of the application.
9. Profile and optimize Python code for improved performance and scalability.
10. Implement and maintain CI/CD pipelines for automated testing and deployment.
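Responsibility 3 above (Redis for caching and performance) typically follows the cache-aside pattern. A minimal sketch, with a plain dict standing in for a Redis client and a hypothetical database lookup:

```python
import time

# Stand-in for Redis: key -> (expiry timestamp, value).
cache: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 300

def fetch_user_from_db(user_id: str) -> str:
    """Hypothetical slow database lookup."""
    return f"user-record-{user_id}"

def get_user(user_id: str) -> str:
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry and entry[0] > time.time():   # cache hit, not expired
        return entry[1]
    value = fetch_user_from_db(user_id)    # cache miss: fall back to the DB
    cache[key] = (time.time() + TTL_SECONDS, value)  # populate for next call
    return value

print(get_user("42"))  # first call misses the cache and hits the DB
print(get_user("42"))  # second call is served from the cache
```

With a real Redis client the dict operations become `GET`/`SET` calls with a TTL, but the control flow is the same.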
KEY REQUIREMENTS
· 2+ years of experience in Python backend development.
· Strong proficiency in Python, Django, and RESTful API development.
· Experience with FastAPI, asyncio, and other modern Python libraries and frameworks.
· Solid understanding of database technologies, particularly PostgreSQL.
· Proficiency in using Redis for caching and performance optimization.
· Experience with Docker containerization and orchestration.
· Knowledge of cloud platforms (AWS) and experience with related services (e.g., EC2, S3, RDS).
· Familiarity with message brokers like RabbitMQ or Kafka.
· Experience with Test-Driven Development (TDD) and automated testing frameworks.
· Proficiency in version control systems, particularly Git.
· Strong problem-solving skills and attention to detail.
· Excellent communication skills and ability to work effectively in a collaborative environment.
· Experience with Agile development methodologies.
PERKS & BENEFITS
- Professional Development through Learning & Up-skilling
- Flexible working hours
- Medical benefits
- Exciting work culture
Job Description: Python Backend Developer
Experience: 7-12 years
Job Type: Full-time
Job Overview:
Wissen Technology is looking for a highly experienced Python Backend Developer with 7-12 years of experience to join our team. The ideal candidate will have deep expertise in backend development using Python, with a strong focus on Django and Flask frameworks.
Key Responsibilities:
- Develop and maintain robust backend services and APIs using Python, Django, and Flask.
- Design scalable and efficient database schemas, integrating with both relational and NoSQL databases.
- Collaborate with front-end developers and other team members to establish objectives and design functional, cohesive code.
- Optimize applications for maximum speed and scalability.
- Ensure security and data protection protocols are implemented effectively.
- Troubleshoot and debug applications to ensure a seamless user experience.
- Participate in code reviews, testing, and quality assurance processes.
Required Skills:
Python: Extensive experience in backend development using Python.
Django & Flask: Proficiency in Django and Flask frameworks.
Database Management: Strong knowledge of databases such as PostgreSQL, MySQL, and MongoDB.
API Development: Expertise in building and maintaining RESTful APIs.
Security: Understanding of security best practices and data protection measures.
Version Control: Experience with Git for collaboration and version control.
Problem-Solving: Strong analytical skills with a focus on writing clean, efficient code.
Communication: Excellent communication and teamwork skills.
Preferred Qualifications:
- Experience with cloud services like AWS, Azure, or GCP.
- Familiarity with Docker and containerization.
- Knowledge of CI/CD practices.
Why Join Wissen Technology?
- Opportunity to work on innovative projects with a cutting-edge technology stack.
- Competitive compensation and benefits package.
- A supportive environment that fosters professional growth and learning.
PlanetSpark is Hiring!!
Title of the Job: Data Analyst (Full-Time)
Location: Gurgaon
Roles and Responsibilities/Mission Statement:
We are seeking an experienced Data Analyst to join our dynamic team. The ideal candidate will possess a strong analytical mindset, excellent problem-solving skills, and a passion for uncovering actionable insights from data. As a Data Analyst, you will be responsible for collecting, processing, and analyzing large datasets to help inform business decisions and strategies, and will be the source of company-wide intelligence.
The responsibilities would include :
1) Creating a robust Sales MIS
2) Tracking key metrics of the company
3) Reporting key metrics on a daily basis
4) Sales incentive and teacher payout calculation
5) Tracking and analyzing large volumes of consumer data related to customers and teachers
6) Developing intelligence from data drawn from various sources
Ideal Candidate Profile -
- 1-4 years of experience in a data-intensive position at a consumer business or a Big 4 firm
- Excellent command of advanced Excel
- Knowledge of other data analytics tools such as SQL, Python, R, and data visualization tools is good to have.
- Detail-oriented with strong organizational skills and the ability to manage multiple projects simultaneously.
- Exceptional analytical ability
Eligibility Criteria:
- Willing to work 5 days a week from the office, with Saturdays work-from-home
- Willing to work in an early-stage startup
- Must have 1-3 years of prior experience in a data-focused role at a consumer internet company or a Big 4 firm
- Must have excellent analytical abilities
- Available to relocate to Gurgaon
- Candidate should have their own laptop
- Gurgaon-based candidates will be given preference
Join us and leverage your analytical expertise to drive data-driven decisions and contribute to our success. Apply today!
Who are we looking for?
We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research in collaboration with internal and external teams, and facilitating reviews of ML systems to surface innovative ideas for prototyping new models.
Qualification and experience
- B.Tech/Masters/Ph.D. in computer science, electrical engineering, mathematics, data science, and related fields.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, and ML Libraries like PyTorch, sklearn, pandas, SQL, and Git is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and avail yourself of an industry-driven mentorship program, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations, you made it!
If this position sparked your interest, apply now to initiate the screening process.
About Davis Index
Davis Index is a market intelligence platform and publication that provides price benchmarks for recycled materials and primary metals.
Our team of dedicated reporters, analysts, and data specialists publish and process over 1,400 proprietary price indexes, metals futures prices, and other reference data including market intelligence, news, and analysis through an industry-leading technology platform.
About the role
Here at Davis Index, we look to bring true, accurate market insights, news and data to the recycling industry. This enables sellers and buyers to boost their margins, and access daily market intelligence, data analytics, and news.
We’re looking for a keen data expert who will take on a high-impact role that focuses on end-to-end data management, BI and analysis tasks within a specific functional area or data type. If taking on challenges in building, extracting, refining and very importantly automating data processes is something you enjoy doing, apply to us now!
Key Role
Data Visualization: Power BI, Tableau, Python
DB Management: SQL, MongoDB
Data Collection, Cleaning, Modelling, Analysis
Programming Languages and Tools: Python, R, VBA, Apps Script, Excel, Google Sheets
What you will do in this role
- Build and maintain data pipelines from internal databases.
- Data mapping of data elements between source and target systems.
- Create data documentation including mappings and quality thresholds.
- Build and maintain analytical SQL/MongoDB queries, scripts.
- Build and maintain Python scripts for data analysis/cleaning/structuring.
- Build and maintain visualizations; delivering voluminous information in comprehensible forms or in ways that make it simple to recognise patterns, trends, and correlations.
- Identify and develop data quality initiatives and opportunities for automation.
- Investigate, track, and report data issues.
- Utilize various data workflow management and analysis tools.
- Ability and desire to learn new processes, tools, and technologies.
- Understanding fundamental AI and ML concepts.
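The cleaning-and-structuring scripts described above can be sketched with the standard library alone; the price-feed data and field names here are hypothetical, chosen to mirror Davis Index's price-index domain:

```python
import csv
import io
import statistics

# Hypothetical raw price feed with a messy (empty-price) row.
raw = """material,price_usd
copper,9100
copper,
aluminium,2300.5
copper,9150
"""

# Clean and structure: coerce prices to float, drop unparseable rows.
rows = []
for rec in csv.DictReader(io.StringIO(raw)):
    try:
        rows.append({"material": rec["material"].strip(),
                     "price_usd": float(rec["price_usd"])})
    except (TypeError, ValueError):
        continue  # skip rows with missing or malformed prices

# Analyze: mean copper price over the cleaned rows.
copper = [r["price_usd"] for r in rows if r["material"] == "copper"]
print(round(statistics.mean(copper), 2))
```

In practice the same shape of script would read from MongoDB/SQL rather than an inline string, per the role's stack.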
Must have experience and qualifications
- Bachelor's degree in Computer Science, Engineering, or Data related field required.
- 2+ years’ experience in data management.
- Advanced proficiency with Microsoft Excel and VBA / Google Sheets and Apps Script.
- Proficiency with MongoDB/SQL.
- Familiarity with Python for data manipulation and process automation preferred.
- Proficiency with various data types and formats including, but not limited to JSON.
- Intermediate proficiency with HTML/CSS.
- Data-driven strategic planning
- Strong background in data analysis, data reporting, and data management, coupled with adept process mapping and improvement.
- Strong research skills.
- Attention to detail.
What you can expect
Work closely with a global team helping bring market intelligence to the recycling world. As a part of the Davis Index team we look to foster relationships and help you grow with us. You can also expect:
- Work with leading minds from the recycling industry and be part of a growing, energetic global team
- Exposure to developments and tools within your field ensures evolution in your career and skill building along with competitive compensation.
- Health insurance coverage, paid vacation days and flexible work hours helping you maintain a work-life balance
- Have the opportunity to network and collaborate in a diverse community
Apply Directly using this link : https://nyteco.keka.com/careers/jobdetails/54122
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
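The ETL-pipeline skills listed above can be sketched end to end with a toy extract-transform-load run; the sensor data, machine names, and table schema are hypothetical, and SQLite stands in for the real target warehouse:

```python
import sqlite3

# Extract: hypothetical temperature readings from a manufacturing line.
raw_readings = [("press-1", "21.5"), ("press-1", "bad"), ("press-2", "19.0")]

# Transform: coerce readings to float, dropping unparseable values.
clean = []
for machine, temp in raw_readings:
    try:
        clean.append((machine, float(temp)))
    except ValueError:
        continue  # reject malformed sensor values

# Load: write the cleaned rows into a SQLite table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (machine TEXT, temp_c REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?)", clean)
count, = con.execute("SELECT COUNT(*) FROM readings").fetchone()
print(count)  # number of valid rows loaded
```

A production pipeline would add scheduling, incremental loads, and monitoring, but the extract/transform/load separation stays the same.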
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits and Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
- Architectural Leadership:
- Design and architect robust, scalable, and high-performance Hadoop solutions.
- Define and implement data architecture strategies, standards, and processes.
- Collaborate with senior leadership to align data strategies with business goals.
- Technical Expertise:
- Develop and maintain complex data processing systems using Hadoop and its ecosystem (HDFS, YARN, MapReduce, Hive, HBase, Pig, etc.).
- Ensure optimal performance and scalability of Hadoop clusters.
- Oversee the integration of Hadoop solutions with existing data systems and third-party applications.
- Strategic Planning:
- Develop long-term plans for data architecture, considering emerging technologies and future trends.
- Evaluate and recommend new technologies and tools to enhance the Hadoop ecosystem.
- Lead the adoption of big data best practices and methodologies.
- Team Leadership and Collaboration:
- Mentor and guide data engineers and developers, fostering a culture of continuous improvement.
- Work closely with data scientists, analysts, and other stakeholders to understand requirements and deliver high-quality solutions.
- Ensure effective communication and collaboration across all teams involved in data projects.
- Project Management:
- Lead large-scale data projects from inception to completion, ensuring timely delivery and high quality.
- Manage project resources, budgets, and timelines effectively.
- Monitor project progress and address any issues or risks promptly.
- Data Governance and Security:
- Implement robust data governance policies and procedures to ensure data quality and compliance.
- Ensure data security and privacy by implementing appropriate measures and controls.
- Conduct regular audits and reviews of data systems to ensure compliance with industry standards and regulations.
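The MapReduce work described in the responsibilities above can be illustrated locally with a pure-Python word-count sketch of the map, shuffle, and reduce phases (the same shape a Hadoop Streaming job would take, minus the distributed runtime):

```python
from collections import defaultdict

def map_phase(line: str):
    """Map: emit a (word, 1) pair for every word in a line."""
    for word in line.lower().split():
        yield word, 1

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts."""
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

lines = ["Hadoop scales out", "Hadoop stores data in HDFS"]
pairs = [kv for line in lines for kv in map_phase(line)]
counts = reduce_phase(pairs)
print(counts["hadoop"])  # 2
```

On a real cluster the shuffle is performed by the framework across nodes; here it is simulated by the in-memory grouping.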
Company: Optimum Solutions
About the company: Optimum Solutions is a leader in the sheet metal industry, providing sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. From tools to software and machines, we are a one-stop shop for all your technology needs.
Position: Generative AI Lead
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 10-12 years
CTC: 33LPA
Employment Type: Full Time
Key Responsibilities:
- Utilize advanced machine learning techniques to develop and train generative AI models.
- Collaborate with cross-functional teams to groom and preprocess large datasets for model training.
- Research and implement cutting-edge algorithms and architectures for generative AI applications.
- Optimize model performance and scalability for real-time inference and deployment.
- Stay current with industry trends and advancements in generative AI technology to drive innovation.
- Experiment with different hyperparameters and model configurations to improve generative AI model quality.
- Establish scalable, efficient, automated processes for data analysis, model development, validation, and implementation.
- Choose suitable DL algorithms, software, and hardware, and suggest integration methods.
- Ensure AI/ML solutions are developed, and validations performed, in accordance with Responsible AI guidelines and standards.
- Closely monitor model performance and ensure model improvements are made post project delivery.
- Provide technical expertise and guidance to support the integration of generative AI solutions into various products and services.
- Coach and mentor our team as we build scalable machine learning solutions
- Strong communication skills and an easy-going attitude
- Oversee development and implementation of assigned programs and guide teammates
- Carry out testing procedures to ensure systems are running smoothly
- Ensure that systems satisfy quality standards and procedures
- Build and manage strong relationships with stakeholders and various teams, internally and externally.
- Provide direction and structure to assigned project activities, establishing clear, precise goals, objectives, and timeframes; run project governance calls with senior stakeholders.
Skills and Qualifications:
- Strong understanding of machine learning and deep learning principles and algorithms.
- Experience in developing and implementing generative AI models and algorithms.
- Proficiency in Python and in ML frameworks such as TensorFlow and PyTorch.
- Ability to work with large datasets and knowledge of data preprocessing techniques.
- Familiarity with natural language processing (NLP) and computer vision for generative AI applications.
- Experience in building and deploying generative AI systems in real-world applications.
- Strong problem-solving and critical thinking skills for complex AI problems.
- Excellent communication and teamwork abilities to collaborate with cross-functional teams.
- Proven track record of delivering innovative solutions using generative AI technologies.
- Ability to stay updated with the latest advancements in generative AI and adapt to new techniques and methodologies.
- Application Development: frameworks such as LangChain and LlamaIndex; vector DBs such as Pinecone, Chroma, or Weaviate
- Fine-Tuning Models: compute services (Azure, Google Cloud, Lambda); data services (Scale, Labelbox); hosting services (Azure, AWS); ML frameworks (TensorFlow, PyTorch)
- Model Hubs: Hugging Face, Databricks
- Foundation Models: open-source (Mistral, Llama) and proprietary (GPT-4)
- Compute Hardware: specialized hardware for model training and inference
- Programming Proficiency:
- Advanced Python Skills - Generative AI experts should have a deep understanding of Python, including its data structures, OOP concepts, and libraries such as NumPy and Pandas. They must be able to write clean, efficient, and maintainable code to implement complex AI algorithms.
- TensorFlow and Keras Expertise - TensorFlow and Keras are widely used in the AI community for building neural networks and deep learning models. Generative AI experts should have a thorough understanding of these libraries, including how to design neural network architectures, customize loss functions, and optimize models for performance.
- Debugging and Optimization - Solving complicated problems is a common part of developing generative AI models. Experts must be adept at debugging methods, such as logging and profiling, to find and address problems quickly. They should also know how to optimize code for memory efficiency and performance, helping models handle large-scale datasets.
- Effective Data Management - Managing big datasets is one of the most frequent tasks in AI development. Generative AI experts should be adept at manipulating data with tools like Pandas and NumPy. To guarantee that the data they use for their models is of the highest caliber, they also need to know how to efficiently preprocess and clean data.
- Version Control and Collaboration - Git and other version control systems are crucial for tracking code changes and fostering collaboration in a team environment. To enable smooth cooperation on AI projects, generative AI experts should be familiar with Git workflows, branching techniques, and handling merge conflicts.
- Deep Learning Expertise - Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks
- Knowledge of Generative Models - Transformers and attention networks, Generative Adversarial Networks
- Generative AI Basics and Advanced Concepts:
- Prompt Engineering - Crafting high-quality prompts is crucial for guiding generative models. Experts should excel in designing prompts that steer the model’s creativity and coherence. They must understand how to fine-tune prompts for tasks like text, image, and music generation.
- Attention Mechanisms - Grasping attention mechanisms in models like Transformers, vital for capturing dependencies and context in generative tasks.
- Application Development Approaches - Familiarity with integrating generative models into applications is essential. This includes deploying models in mobile apps, web applications, or as APIs. Experts should consider factors such as model size, latency, and scalability during deployment.
- Fine-Tuning - Mastery of techniques like fine-tuning language models (e.g., GPT-3) for specific tasks. This involves adjusting model parameters and prompts to generate contextually relevant and accurate outputs.
- RAG (Retrieval-Augmented Generation) - Understanding RAG, a framework that combines generative models with retrieval mechanisms. Experts can use RAG to improve model responses by retrieving relevant information from a large dataset. Also valuable is proficiency in chaining multiple generative models together to create more complex and diverse outputs, connecting models in a sequence so that each output builds on the last.
- Multimodal Generation - Ability to generate outputs across multiple modalities (e.g., text and images), requiring integration of different generative models.
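As a concrete illustration of the RAG idea described above, here is a minimal, library-free sketch: retrieval is reduced to simple word overlap, and the generative model is represented only by the grounded prompt that would be sent to it. The corpus and query are invented for the example.

```python
# A minimal, library-free sketch of Retrieval-Augmented Generation (RAG):
# retrieve the most relevant passage by word overlap, then build a prompt
# that grounds the (hypothetical) generative model in that context.

CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Python is a programming language created by Guido van Rossum.",
    "Transformers use attention mechanisms to capture context.",
]

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the passage sharing the most words with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query, CORPUS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When was the Eiffel Tower completed?")
print(prompt)
```

Production systems replace the word-overlap scorer with embedding similarity over a vector store, but the retrieve-then-prompt shape is the same.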
Company: Optimum Solutions
About the company: Optimum Solutions is a leader in the sheet metal industry, providing sheet metal solutions to fabricators with a proven track record of reliable product delivery. From tools through software to machines, we are a one-stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes; hands-on experience with SQL queries and managing database server deployments
- Implementing automated testing platforms, unit tests, and CI/CD pipelines
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platforms, such as Docker
Job Description
- We are looking for a strong Python Developer with knowledge of machine learning and deep learning frameworks.
- Your primary focus will be working with the Product and Use-case Delivery team on prompting for different Gen-AI use cases.
- You will be responsible for prompting and building use-case pipelines.
- Perform evaluation of all the Gen-AI features and use-case pipelines.
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Own the entire prompt life cycle: prompt design, prompt template creation, and prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage a team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt guardrails to prevent attacks like prompt injection, jailbreaking, and prompt leaking
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
Skills and Qualifications
- Strong proficiency with Python, the Django framework, and regular expressions (regex)
- Good understanding of machine learning frameworks such as PyTorch and TensorFlow
- Knowledge of Generative AI and RAG pipelines
- Good grasp of microservice design patterns and developing scalable applications
- Ability to build and consume REST APIs
- Fine-tune and optimize code for better performance
- Strong understanding of OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge of API gateways and proxies, such as WSO2, Kong, NGINX, and Apache HTTP Server
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge of microservices architecture: behavior, dependencies, scalability, etc.
- Experience deploying on cloud platforms like Azure or AWS
- Familiarity and working experience with DevOps tools like Azure DevOps, Ansible, Jenkins, Terraform
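One of the listed skills, understanding the nature of asynchronous programming and its quirks, can be illustrated with a short asyncio sketch: awaiting coroutines one at a time serializes their I/O waits, while `asyncio.gather` overlaps them. The `fetch` coroutine here is a made-up stand-in for a real I/O call.

```python
import asyncio
import time

# Illustrates a common async quirk: awaiting coroutines one at a time
# serializes their waits, while asyncio.gather() overlaps them, so the
# total time is roughly max(delays) rather than sum(delays).

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a real I/O call
    return name

async def main() -> list[str]:
    start = time.perf_counter()
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    print(f"elapsed: {time.perf_counter() - start:.2f}s")  # ~0.1s, not 0.2s
    return results

print(asyncio.run(main()))
```

The classic pitfall is writing `await fetch(...)` in a loop, which gives up all concurrency; `gather` (or `TaskGroup` in Python 3.11+) is the idiomatic fix.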
Zethic Technologies is one of the leading creative tech studios based in Bangalore. Zethic’s team members have years of experience in software development. Zethic specializes in custom software development, mobile application development, chatbot development, web application development, UI/UX design, and consulting.
Your Responsibilities:
- Coordinating with the software development team in addressing technical doubts
- Reviewing ongoing operations and rectifying any issues
- Work closely with the developers to determine and implement appropriate design and code changes, and make relevant recommendations to the team
- Very good leadership skills with the ability to lead multiple development teams
- Ability to learn new technologies rapidly and share knowledge with other team members.
- Provide technical leadership to programmers working on the development project team.
- Must have knowledge of stages in SDLC
- Should be knowledgeable about designing the overall architecture of the web application. Should have experience working with graphic designers and converting designs to visual elements.
- Highly experienced with back-end programming languages (PHP, Python, JavaScript). Proficient experience using advanced JavaScript libraries and frameworks such as ReactJS.
- Development experience for both mobile and desktop. Knowledge of code versioning tools (GIT)
- Mentors junior web developers on technical issues and modern web development best practices and solutions
- Developing reusable code for continued use
Why join us?
We’re multiplying and the sky’s the limit
Work with a talented team; you’ll learn a lot from them
We care about delivering value to our excellent customers
We are flexible in our opinions and always open to new ideas
We know it takes people with different ideas, strengths, backgrounds, cultures, beliefs, and interests to make our Company succeed.
We celebrate and respect all our employees equally.
Zethic ensures equal employment opportunity without discrimination or harassment based on race, color, religion, sex, gender identity, age, disability, national origin, marital status, genetic information, veteran status, or any other characteristic protected by law.
Company Description
CorpCare is India’s first all-in-one corporate funds and assets management platform, based in Mumbai. We offer a single-window solution for corporates, family offices, and HNIs to formulate and manage treasury management policies. Our portfolio management system provides assistance in conducting reviews with investment committees and the board.
Role Description
- Role: Python Developer
- CTC: Up to 12 LPA
This is a full-time on-site role for a Python Developer located in Mumbai. The Python Developer will be responsible for back-end web development, software development, object-oriented programming (OOP), programming, and databases. The Python Developer will also be responsible for performing system analysis and creating robust and scalable software solutions.
Qualifications
- 2+ years of work experience with Python (Programming Language)
- Expertise in back-end web development
- Proficiency in software development, especially with the Django framework, FastAPI, REST APIs, and AWS
- Experience in Programming and Databases
- Understanding of Agile development methodologies
- Excellent problem-solving and analytical skills
- Ability to work in a team environment
- Bachelor's or Master's degree in Computer Science or relevant field
- Relevant certifications in Python and related frameworks are preferred
- Engage with client business team managers and leaders independently to understand their requirements, help them structure their needs into data needs, prepare functional and technical specifications for execution, and ensure delivery from the data team. This can be a combination of ETL processes, reporting tools, and analytics tools like SAS, R, and the like.
- Lead and manage the Business Analytics team, ensuring effective execution of projects and initiatives.
- Develop and implement analytics strategies to support business objectives and drive data-driven decision-making.
- Analyze complex data sets to provide actionable insights that improve business performance.
- Collaborate with other departments to identify opportunities for process improvements and implement data-driven solutions.
- Oversee the development, maintenance, and enhancement of dashboards, reports, and analytical tools.
- Stay updated with the latest industry trends and technologies in analytics and data science.
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
at Cargill Business Services
Job Purpose and Impact:
The Sr. Generative AI Engineer will architect, design, and develop new and existing GenAI solutions for the organization. As a Generative AI Engineer, you will be responsible for developing and implementing products using cutting-edge generative AI and RAG to solve complex problems and drive innovation across our organization. You will work closely with data scientists, software engineers, and product managers to design, build, and deploy AI-powered solutions that enhance our products and services at Cargill. You will bring order to ambiguous scenarios and apply in-depth and broad knowledge of architectural, engineering, and security practices to ensure your solutions are scalable, resilient, and robust, and you will share knowledge on modern practices and technologies with the shared engineering community.
Key Accountabilities:
• Apply software and AI engineering patterns and principles to design, develop, test, integrate, maintain and troubleshoot complex and varied Generative AI software solutions and incorporate security practices in newly developed and maintained applications.
• Collaborate with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals.
• Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services; optimize existing generative AI models and RAG for improved performance, scalability, and efficiency; and develop and maintain pipelines and RAG solutions, including data preprocessing, prompt engineering, benchmarking, and fine-tuning.
• Develop clear and concise documentation, including technical specifications, user guides and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
• Participate in the engineering community by maintaining and sharing relevant technical approaches and modern skills in AI.
• Contribute to the establishment of best practices and standards for generative AI development within the organization.
• Independently handle complex issues with minimal supervision, while escalating only the most complex issues to appropriate staff.
Minimum Qualifications:
• Bachelor’s degree in a related field or equivalent experience
• Minimum of five years of related work experience
• You are proficient in Python and have experience with machine learning libraries and frameworks
• You have a deep understanding of industry-leading foundation model capabilities and their application
• You are familiar with cloud-based Generative AI platforms and services
• You have full-stack software engineering experience building products using foundation models
• Confirmed experience architecting applications, databases, services or integrations.
Job Purpose and Impact:
The Enterprise Resource Planning (ERP) Engineering Supervisor will lead a small engineering team across technology and business capabilities to build and enhance modern business applications for ERP systems in the company. In this role, you will guide the team in product development, architecture, and technology adherence to ensure delivered solutions are secure and scalable. You will also foster team development and cross-team relationships and delivery to advance the company's engineering delivery.
Key Accountabilities:
- Lead a team of engineering professionals that design, develop, deploy and enhance the new and existing software solutions.
- Provide direction to the team to build highly scalable and resilient software products and platforms to support business needs.
- Provide input and guidance to the delivery team across technology and business capabilities to accomplish team deliverables.
- Provide support to software engineers dedicated to products in other portfolios within ERP teams.
- Partner with the engineering community to coach engineers, share relevant technical approaches, identify new trends, modern skills and present code methodologies.
Qualifications:
MINIMUM QUALIFICATIONS:
- Bachelor’s degree in a related field or equivalent experience
- Minimum of four years of related work experience
PREFERRED QUALIFICATIONS:
- Confirmed hands on technical experience with technologies including cloud, software development and continuous integration and continuous delivery
- 2 years of supervisory experience
- Experience leading engineers in the areas of ERP Basis, code development (ABAP, HTML5, Python, Java, etc.), or design thinking
at Vola Finance
Roles & Responsibilities
Basic Qualifications:
● The position requires a four-year degree from an accredited college or university.
● Three years of data engineering / AWS Architecture and security experience.
Top candidates will also have:
Proven/strong understanding of and/or experience in many of the following:
● Experience designing Scalable AWS architecture.
● Ability to create modern data pipelines and data processing using AWS PaaS components (Glue, etc.) or open-source tools (Spark, HBase, Hive, etc.).
● Ability to develop SQL structures that support high volumes and scalability using RDBMS such as SQL Server, MySQL, Aurora, etc.
● Ability to model and design modern data structures, SQL/NoSQL databases, Data Lakes, Cloud Data Warehouse
● Experience in creating network architectures for secure, scalable solutions.
● Experience with message brokers such as Kinesis, Kafka, RabbitMQ, AWS SQS, AWS SNS, and Apache ActiveMQ. Hands-on experience with AWS serverless architectures such as Glue, Lambda, Redshift, etc.
● Working knowledge of load balancers, AWS Shield, AWS GuardDuty, VPCs, subnets, network gateways, Route 53, etc.
● Knowledge of building disaster management systems and security log notification systems.
● Knowledge of building scalable microservice architectures with AWS.
● Ability to create a framework for monthly security checks, and broad knowledge of AWS services.
● Experience deploying software using CI/CD tools such as CircleCI, Jenkins, etc.
● ML/AI model deployment and production maintenance experience is mandatory.
● Experience with API tools such as REST, Swagger, Postman and Assertible.
● Versioning management tools such as GitHub, Bitbucket, GitLab.
● Debugging and maintaining software in Linux or Unix platforms.
● Test driven development
● Experience building transactional databases.
● Python and PySpark programming experience.
● Must have experience engineering solutions in AWS.
● Working AWS experience; AWS certification is required prior to hiring.
● Working in Agile Framework/Kanban Framework
● Must demonstrate solid knowledge of computer science fundamentals like data structures & algorithms.
● Passion for technology and an eagerness to contribute to a team-oriented environment.
● Demonstrated leadership on medium to large-scale projects impacting strategic priorities.
● Bachelor’s degree in Computer science or Electrical engineering or related field is required
at DeepIntent
With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.
We are seeking a skilled and experienced Site Reliability Engineer (SRE) to join our dynamic team. The ideal candidate will have a minimum of 3 years of hands-on experience in managing and maintaining production systems, with a focus on reliability, scalability, and performance. As an SRE at Deepintent, you will play a crucial role in ensuring the stability and efficiency of our infrastructure, as well as contributing to the development of automation and monitoring tools.
Responsibilities:
- Deploy, configure, and maintain Kubernetes clusters for our microservices architecture.
- Utilize Git and Helm for version control and deployment management.
- Implement and manage monitoring solutions using Prometheus and Grafana.
- Work on continuous integration and continuous deployment (CI/CD) pipelines.
- Containerize applications using Docker and manage orchestration.
- Manage and optimize AWS services, including but not limited to EC2, S3, RDS, and AWS CDN.
- Maintain and optimize MySQL databases, Airflow, and Redis instances.
- Write automation scripts in Bash or Python for system administration tasks.
- Perform Linux administration tasks and troubleshoot system issues.
- Utilize Ansible and Terraform for configuration management and infrastructure as code.
- Demonstrate knowledge of networking and load-balancing principles.
- Collaborate with development teams to ensure applications meet reliability and performance standards.
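As a small illustration of the "automation scripts in Bash or Python for system administration tasks" responsibility above, here is a sketch of a disk-space check using only the standard library; the monitored paths and the 10% threshold are illustrative assumptions, not part of the original posting.

```python
import shutil

# Warn when any monitored mount point runs low on free space.
# Paths and the 10% default threshold are illustrative assumptions.
def low_disk_mounts(paths, min_free_ratio=0.10):
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)          # total, used, free in bytes
        free_ratio = usage.free / usage.total
        if free_ratio < min_free_ratio:
            alerts.append((path, round(free_ratio, 3)))
    return alerts

if __name__ == "__main__":
    for path, ratio in low_disk_mounts(["/"]):
        print(f"LOW DISK: {path} only {ratio:.1%} free")
```

In practice a script like this would run from cron or a systemd timer and push alerts to a monitoring system such as Prometheus rather than printing.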
Additional Skills (Good to Know):
- Familiarity with ClickHouse and Druid for data storage and analytics.
- Experience with Jenkins for continuous integration.
- Basic understanding of Google Cloud Platform (GCP) and data center operations.
Qualifications:
- Minimum 3 years of experience in a Site Reliability Engineer role or similar.
- Proven experience with Kubernetes, Git, Helm, Prometheus, Grafana, CI/CD, Docker, and microservices architecture.
- Strong knowledge of AWS services, MySQL, Airflow, Redis, AWS CDN.
- Proficient in scripting languages such as Bash or Python.
- Hands-on experience with Linux administration.
- Familiarity with Ansible and Terraform for infrastructure management.
- Understanding of networking principles and load balancing.
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field.
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
About The Role
To design, implement, and execute testing procedures for our software applications. In this role, the candidate will be instrumental in driving our software quality assurance lifecycle, collaborating with development teams to establish test strategies, and developing automated tests to uphold our stringent quality benchmarks, thereby reducing manual regression efforts.
By integrating tests into the CI/CD pipeline, the candidate will ensure that software releases are reliable and of high quality. Additionally, the candidate will troubleshoot and diagnose issues in systems under test, contributing to the continuous improvement of the software development process.
What Describes You Best
- Minimum Bachelor's degree in Computer Science, Engineering, or a related discipline.
- 2 to 3 years experience in Automation Testing.
- Experience of working on SAAS /enterprise products is preferred.
Technical Skills: (must have)
- Sound understanding of SDLC processes and the QA lifecycle and methodology
- Hands-on experience with test Automation tools and frameworks such as Selenium WebDriver (with Java), Cucumber, Appium, or TestNG
- Proven experience in test automation scripting using Java
- Strong Understanding of DOM
- Good experience with continuous integration/continuous deployment (CI/CD) concepts and tools like Jenkins or GitLab CI.
- Hands-on experience with any of the bug tracking and test management tools (e.g. GitLab, Jira, Jenkins, Bugzilla, etc.)
- Experience with API testing (Postman or Similar RESTClient)
Additional Skills: (nice to have)
- Knowledge of performance testing tools such as JMeter
- Knowledge of Serenity BDD Framework
- Knowledge of Python programming language
What will you Own
The key accountability of the candidate will be to maintain and enhance the QA automation process (along with CI/CD/CT), create/update test suites, write documentation, and ensure quality delivery of our software components by automation testing and also contribute to manual testing when required. Furthermore, enhance the product by utilizing automation scripting in solution development and improving processes/workflows.
How will you spend your time at Eclat
QA and Documentation
- Sketching out ideas for automated software test procedures.
- Enhancing, Optimizing, and maintaining automated CI/CD/CT workflows.
- Write, design, execute, and maintain automation scripts for web and mobile platforms.
- Maximizing test coverage for the most critical features of the application to reduce manual testing effort and enable quick regression runs.
- Reviewing software bug reports, maintaining reporting of automation test suites, and highlighting problem areas.
- Managing and troubleshooting issues in systems under test.
- Establishing and coordinating test strategies with development/product teams.
- Manage documentation repositories and version control systems.
Post-delivery participation - Training and User Feedback
- Participating in user feedback sessions to identify and understand user persona and requirements.
- Working closely with the support team in providing necessary product technical support.
Why Join Us
- Be a part of our growth story as we aim to take a leadership position in international markets
- Opportunity to manage and lead global teams and channel partner network
- Join technology innovators who believe in solving world-scale challenges to drive global knowledge-sharing
- Healthy work/life balance, offering wellbeing initiatives, parental leave, career development assistance, required work infrastructure support
About the Company :
Nextgen Ai Technologies is at the forefront of innovation in artificial intelligence, specializing in developing cutting-edge AI solutions that transform industries. We are committed to pushing the boundaries of AI technology to solve complex challenges and drive business success.
Currently offering "Data Science Internship" for 2 months.
Details of the data science projects interns will work on:
Project 01 : Image Caption Generator Project in Python
Project 02 : Credit Card Fraud Detection Project
Project 03 : Movie Recommendation System
Project 04 : Customer Segmentation
Project 05 : Brain Tumor Detection with Data Science
Eligibility
A PC or laptop with decent internet speed.
A good understanding of the English language.
Any graduate with a desire to become a web developer. Freshers are welcome.
Knowledge of HTML, CSS, and JavaScript is a plus but NOT mandatory.
You will also receive proper training, so don't hesitate to apply if you don't have a coding background.
Please note that THIS IS AN INTERNSHIP, NOT A JOB.
We recruit permanent employees only from among our interns (if needed).
Duration : 02 Months
MODE: Work From Home (Online)
Responsibilities
Manage reports and sales leads in salesforce.com, CRM.
Develop content, manage design, and user access to SharePoint sites for customers and employees.
Build data-driven reports, stored procedures, and query optimizations using SQL and PL/SQL.
Learn the essentials of C++ and Java to refine code and build the exterior layer of web pages.
Configure and load XML data for the BVT tests.
Set up a GitHub page.
Develop Spark scripts using the Scala shell as per requirements.
Develop and A/B test improvements to business survey questions on iOS.
Deploy statistical models to various company data streams using Linux shells.
Create monthly performance-based client billing reports using MySQL and NoSQL databases.
Utilize Hadoop and MapReduce to generate dynamic queries and extract data from HDFS.
Create source code using JavaScript and PHP to make web pages functional.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Effective communication skills to convey complex technical concepts.
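The reporting responsibility above (building data-driven reports with SQL) can be sketched with Python's built-in sqlite3 module; the billing table and its columns are hypothetical and stand in for a real reporting database.

```python
import sqlite3

# Sketch of a monthly per-client billing report using SQLite (stdlib);
# the table and column names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE billing (client TEXT, month TEXT, amount REAL)")
con.executemany(
    "INSERT INTO billing VALUES (?, ?, ?)",
    [("Acme", "2024-01", 100.0), ("Acme", "2024-01", 50.0),
     ("Beta", "2024-01", 75.0)],
)
# Aggregate charges into one row per client per month.
report = con.execute(
    """SELECT client, month, SUM(amount) AS total
       FROM billing GROUP BY client, month ORDER BY client"""
).fetchall()
print(report)  # [('Acme', '2024-01', 150.0), ('Beta', '2024-01', 75.0)]
```

The same GROUP BY shape carries over to MySQL or PL/SQL; only the connection setup and dialect details change.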
Benefits
Internship Certificate
Letter of recommendation
Performance-based stipend
Part time work from home (2-3 Hrs per day)
5 days a week, Fully Flexible Shift
We are seeking a skilled DevOps Engineer with 3+ years of experience to join our team on a permanent work-from-home basis.
Responsibilities:
- Develop and maintain infrastructure using Ansible.
- Write Ansible playbooks.
- Implement CI/CD pipelines.
- Manage GitLab repositories.
- Monitor and troubleshoot infrastructure issues.
- Ensure security and compliance.
- Document best practices.
Qualifications:
- Proven DevOps experience.
- Expertise with Ansible and CI/CD pipelines.
- Proficient with GitLab.
- Strong scripting skills.
- Excellent problem-solving and communication skills.
Regards,
Aishwarya M
Associate HR
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI prize (e.g., EU Horizon 2020), which make TVARIT one of the most innovative AI companies in Germany and Europe.
Requirements:
- Python Experience: Minimum 3+ years.
- Software Development Experience: Minimum 8+ years.
- Data Engineering and ETL Workloads: Minimum 2+ years.
- Familiarity with Software Development Life Cycle (SDLC).
- CI/CD Pipeline Development: Experience in developing CI/CD pipelines for large projects.
- Agile Framework & Sprint Methodology: Experience with Jira.
- Source Version Control: Experience with GitHub or similar SVC.
- Team Leadership: Experience leading a team of software developers/data scientists.
Good to Have:
- Experience with Golang.
- DevOps/Cloud Experience (preferably AWS).
- Experience with React and TypeScript.
Responsibilities:
- Mentor and train a team of data scientists and software developers.
- Lead and guide the team in best practices for software development and data engineering.
- Develop and implement CI/CD pipelines.
- Ensure adherence to Agile methodologies and participate in sprint planning and execution.
- Collaborate with the team to ensure the successful delivery of projects.
- Provide on-site support and training in Pune.
Skills and Attributes:
- Strong leadership and mentorship abilities.
- Excellent problem-solving skills.
- Effective communication and teamwork.
- Ability to work in a fast-paced environment.
- Passionate about technology and continuous learning.
Note: This is a part-time position paid on an hourly basis. The initial commitment is 4-8 hours per week, with potential fluctuations.
Join TVARIT and be a pivotal part of shaping the future of software development and data engineering.
About Us:
Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making).
As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What We Offer:
Join our dynamic startup team as a Senior Data Analyst and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit underwriting features analytics, collections, and portfolio management. The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in analytics and technology bring out the best in you and help us build the best for the company. This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What We Look For:
We are looking for individuals with a strong analytical mindset and a fundamental understanding of the lending industry, primarily focused on credit risk. We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team. We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems. Your willingness to put in the extra hours to build the best will be recognized.
Skills/Requirements:
- Credit Risk & Underwriting: Fundamental knowledge of credit risk and underwriting processes is mandatory. Experience in any lending financial institution is a must. A thorough understanding of all the features evaluated in the underwriting process like credit report info, bank statements, GST data, demographics, etc., is essential.
- Analytics (Python): Excellent proficiency in Python, particularly Pandas and NumPy. A strong analytical mindset and the ability to extract actionable insights from any analysis are crucial. The ability to convert given problem statements into actionable analytics tasks and to frame effective approaches to tackle them is highly desirable.
- Good to have but not mandatory: REST APIs: a fundamental understanding of APIs and previous experience or projects involving API development or integrations. Git: proficiency in version control systems, particularly Git; experience on collaborative projects using Git is highly valued.
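A minimal sketch of the Pandas/NumPy work this kind of role involves, here computing branch-wise portfolio health from a loan book (the column names, thresholds, and figures are hypothetical, not Optimo's actual data or policy):

```python
import pandas as pd
import numpy as np

# Hypothetical loan-book snapshot; real underwriting data would come from
# credit reports, bank statements, GST filings, etc.
loans = pd.DataFrame({
    "branch": ["BLR-1", "BLR-1", "HYD-2", "HYD-2"],
    "amount": [500_000, 250_000, 750_000, 300_000],
    "days_past_due": [0, 45, 10, 95],
})

# Treat loans more than 90 days past due as stressed (a common NPA proxy).
loans["npa_amount"] = np.where(loans["days_past_due"] > 90, loans["amount"], 0)

# Branch-wise exposure and NPA rate by outstanding amount.
health = loans.groupby("branch").agg(
    exposure=("amount", "sum"),
    npa_amount=("npa_amount", "sum"),
)
health["npa_pct"] = health["npa_amount"] / health["exposure"]
print(health)
```

The same groupby-and-ratio pattern extends naturally to static-pool and TAT dashboards.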
What You'll Be Working On:
- Analyze data from different data sources, extract information, and create action items to tackle the given open-ended problems.
- Build strong analytics systems and dashboards that provide easy access to data and insights, including the current status of the company, portfolio health, static pool, branch-wise performance, TAT (turnaround time) monitoring, and more.
- Assist the credit and risk team with insights and action items, helping them make data-backed decisions and fine-tune the credit policy (high involvement in the credit and underwriting process).
- Work on different rule engines that automate the underwriting process end-to-end.
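One way to picture the rule-engine work described above is a small declarative engine where each underwriting rule is a named predicate over applicant features (the rules and thresholds here are illustrative, not an actual credit policy):

```python
# A tiny declarative rule engine: each rule is a (name, predicate) pair
# evaluated against an applicant dict; any failing rule rejects the case
# and the failed rule names become the rejection reasons.
RULES = [
    ("min_credit_score", lambda a: a["credit_score"] >= 650),
    ("max_emi_to_income", lambda a: a["emi_to_income"] <= 0.5),
    ("gst_filed_recently", lambda a: a["months_since_gst_filing"] <= 3),
]

def underwrite(applicant):
    failed = [name for name, check in RULES if not check(applicant)]
    return ("APPROVE", []) if not failed else ("REJECT", failed)

decision, reasons = underwrite({
    "credit_score": 700, "emi_to_income": 0.6, "months_since_gst_filing": 1,
})
# decision == "REJECT", reasons == ["max_emi_to_income"]
```

Keeping rules as data rather than branching code makes the policy easy to audit and to fine-tune as the analytics surface new insights.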
Other Requirements:
- Availability for full-time work in Bangalore. Immediate joiners are preferred.
- Strong passion for analytics and problem-solving.
- At least 1 year of industry experience in an analytics role, specifically in a lending institution, is a must.
- Self-motivated and capable of working both independently and collaboratively.
If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.
Hi,
I am the HR from Janapriya School, Miyapur, Hyderabad, Telangana.
We are currently looking for a primary computer teacher.
The teacher should have at least 2 years of experience teaching computers.
Interested candidates can apply to the above posting.
Who are we?
We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long term commitments with an aim of bringing a product mindset into services.
What we are looking for
We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only on programming languages but also on infrastructure technologies in the cloud.
What you’ll be doing
First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
You will work in a product team. Building products and rapidly rolling out new features and fixes.
You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!
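The "first, you will be writing tests" habit can be sketched in a few lines of Python; the `slugify` function here is just a hypothetical unit under test, not anything from Incubyte's stack:

```python
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# In TDD these tests are written first, fail, and drive the implementation
# until the code produces the same, predictable result every time.
def test_collapses_whitespace():
    assert slugify("  Clean   Code  ") == "clean-code"

def test_is_deterministic():
    assert slugify("Hello World") == slugify("Hello World")

# Run directly (no test runner needed for this sketch):
test_collapses_whitespace()
test_is_deterministic()
```

In practice these would live in a test suite run on every small, frequent release.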
Skills you need in order to succeed in this role
Most Important: Integrity of character, diligence and the commitment to do your best
Must Have: SQL, Databricks, (Scala / PySpark), Azure Data Factory, Test Driven Development
Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing
Self-Learner: You must be extremely hands-on and obsessive about delivering clean code
- Sense of Ownership: Do whatever it takes to meet development timelines
- Experience in creating end-to-end data pipelines
- Experience in Azure Data Factory (ADF) creating multiple pipelines and activities using Azure for full and incremental data loads into Azure Data Lake Store and Azure SQL DW
- Working experience in Databricks
- Strong in BI/DW/Datalake Architecture, design and ETL
- Strong in Requirement Analysis, Data Analysis, Data Modeling capabilities
- Experience in object-oriented programming, data structures, algorithms and software engineering
- Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment.
- Interest in mastering technologies like relational DBMSs, TDD, CI tools like Azure DevOps, complexity analysis and performance
- Working knowledge of server configuration / deployment
- Experience using source control and bug tracking systems, writing user stories and technical documentation
- Expertise in creating tables, procedures, functions, triggers, indexes, views, and joins, and in optimization of complex queries
- Experience with database versioning, backups, and restores
- Expertise in data security
- Ability to perform database performance tuning
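The full-vs-incremental load pattern behind the ADF bullets above can be sketched with a high-watermark check; plain Python stands in for the ADF/Databricks pipeline here, and the row shape and timestamps are hypothetical:

```python
# High-watermark incremental load: only rows modified after the last
# successful load are pulled; passing watermark=None performs a full load.
def incremental_load(source_rows, watermark):
    """source_rows: list of dicts carrying a 'modified_at' timestamp."""
    new_rows = [
        r for r in source_rows
        if watermark is None or r["modified_at"] > watermark
    ]
    # Advance the watermark to the newest row we just loaded.
    new_watermark = max((r["modified_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "modified_at": 100},
    {"id": 2, "modified_at": 200},
    {"id": 3, "modified_at": 300},
]
full, wm = incremental_load(source, None)   # full load: all 3 rows, wm == 300
delta, wm = incremental_load(source + [{"id": 4, "modified_at": 400}], wm)
# delta contains only id 4; wm advances to 400
```

In a real pipeline the watermark would be persisted (e.g. in a control table) between ADF runs rather than held in a variable.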
Mandatory Skills
- C/C++ Programming
- Linux System concepts
- Good Written and verbal communication skills
- Good problem-solving skills
- Python scripting experience
- Prior experience in Continuous Integration and Build System is a plus
- SCM tools like Git, Perforce, etc. is a plus
- Repo, Git, and Gerrit tools
- Android Build system expertise
- Automation development experience with tools like Electric Commander, Jenkins, Hudson
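A small example of the Python-scripting-for-CI flavour of this role: scanning a build log for compiler errors so a Jenkins-style job can fail fast with a summary. The log format and regex are invented for illustration:

```python
import re

# Matches lines of the form "file.c:42: error: message"; the format is
# a simplified, hypothetical stand-in for real compiler output.
ERROR_RE = re.compile(r"^(?P<file>\S+):(?P<line>\d+): error: (?P<msg>.*)$")

def collect_errors(log_text):
    """Return one dict per error line found in the build log."""
    return [m.groupdict() for m in map(ERROR_RE.match, log_text.splitlines()) if m]

log = """\
gcc -c main.c
main.c:42: error: 'foo' undeclared
gcc -c util.c
"""
errors = collect_errors(log)
# errors == [{'file': 'main.c', 'line': '42', 'msg': "'foo' undeclared"}]
```

A CI wrapper would typically exit non-zero when `errors` is non-empty and attach the summary to the build report.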