50+ Python Jobs in Chennai | Python Job openings in Chennai
Apply to 50+ Python Jobs in Chennai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek skilled and experienced data science/machine learning professionals with a strong background in at least one of mathematics, nancial engineering, and electrical engineering, to join our Energy & Utilities team. If you are interested in articial intelligence, excited about solving real business problems in the energy and utilities industry, and keen to contribute to impactful projects, this role is for you!
Work you’ll do
As a data scientist in the energy and utilities industry, you will perform quantitative analysis and build mathematical models to forecast energy demand, supply and strategies of ecient load balancing. You will work on models for short term and long term pricing, improving operational eciency, reducing costs, and ensuring reliable power supply. You’ll work closely with cross-functional teams to deploy these models in solutions that provide insights/ solutions to real-world business problems. You will also be involved in conducting experiments, building POCs and prototypes.
Responsibilities
- Develop and implement quantitative models for load forecasting, energy production and distribution optimization.
- Analyze historical data to identify and predict extreme events, and measure impact of extreme events. Enhance existing pricing and risk management frameworks.
- Develop and implement quantitative models for energy pricing and risk management. Monitor market conditions and adjust models as needed to ensure accuracy and effectiveness.
- Collaborate with engineering and operations teams to provide quantitative support for energy projects. Enhance existing energy management systems and develop new strategies for energy conservation.
- Maintain and improve quantitative tools and software used in energy management.
- Support end-to-end ML/ AI model lifecycle - from data preparation, data analysis and feature engineering to model development, validation and deployment
- Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
- Document methodologies and results, present ndings and communicate insights to non-technical audiences
Skills & Requirements
- Strong background in mathematics, econometrics, electrical engineering, or a related eld.
- Experience data analysis, and quantitative modeling using programming languages such as Python or R.
- Excellent analytical and problem-solving skills.
- Strong understanding and experience with data analysis, statistical and mathematical concepts and ML algorithms
- Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, Pandas, ScikitLearn, NLTK).
- Strong communication skills
- Strong collaboration skills, ability to work with engineering and operations teams.
- A continuous learning attitude and a problem solving mind-set
Good to have -
- Knowledge of energy markets, regulations, and utility operation.
- Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, eciency and high-quality outcomes. We believe the future of work is AI-augmented and boundary less. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, x or improve – anything that isn’t done right, irrespective of who did it. Be selsh about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
Client based at Bangalore location.
Ab Initio Developer
About the Role:
We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.
Key Responsibilities:
· Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.
· Analyze complex data requirements and translate them into effective Ab Initio designs.
· Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.
· Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.
· Optimize performance and scalability of Ab Initio jobs.
· Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.
· Stay up-to-date with the latest Ab Initio technologies and industry best practices.
Required Skills and Experience:
· 2.5 to 8 years of hands-on experience in Ab Initio development.
· Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.
· Proficiency in Ab Initio's graphical interface and command-line tools.
· Experience in data modeling, data warehousing, and ETL concepts.
· Strong SQL skills and experience with relational databases.
· Excellent problem-solving and analytical skills.
· Ability to work independently and as part of a team.
· Strong communication and documentation skills.
Preferred Skills:
· Experience with cloud-based data integration platforms.
· Knowledge of data quality and data governance concepts.
· Experience with scripting languages (e.g., Python, Shell scripting).
· Certification in Ab Initio or related technologies.
About koolio.ai
Website: www.koolio.ai
Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Internship Position
We are looking for a motivated Backend Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.
Key Responsibilities:
- Assist in the development and maintenance of backend systems and APIs.
- Write reusable, testable, and efficient code to support scalable web applications.
- Work with cloud services and server-side technologies to manage data and optimize performance.
- Troubleshoot and debug existing backend systems, ensuring reliability and performance.
- Collaborate with cross-functional teams to integrate frontend features with backend logic.
Requirements and Skills:
- Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Good understanding of server-side technologies like Python
- Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
- Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
- Knowledge of version control systems such as Git.
- Soft Skills:
- Eagerness to learn and adapt in a fast-paced environment.
- Strong problem-solving and critical-thinking skills.
- Effective communication and teamwork capabilities.
- Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.
Why Join Us?
- Gain real-world experience working on a cutting-edge platform.
- Work alongside a talented and passionate team committed to innovation.
- Receive mentorship and guidance from industry experts.
- Opportunity to transition to a full-time role based on performance and company needs.
This internship is an excellent opportunity to kickstart your career in backend development, build critical skills, and contribute to a product that has a real-world impact.
Key Responsibilities:
- Designing, developing, and maintaining AI/NLP-based software solutions.
- Collaborating with cross-functional teams to define requirements and implement new features.
- Optimizing performance and scalability of existing systems.
- Conducting code reviews and providing constructive feedback to team members.
- Staying up-to-date with the latest developments in AI and NLP technologies.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- 6+ years of experience in software development, with a focus on Python programming.
- Strong understanding of artificial intelligence and natural language processing concepts.
- Hands-on experience with AI/NLP frameworks such as Llamaindex/Langchain, OpenAI, etc.
- Experience in implementing Retrieval-Augmented Generation (RAG) systems for enhanced AI solutions.
- Proficiency in building and deploying machine learning models.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills.
Preferred Qualifications:
- Master's degree or higher in Computer Science, Engineering, or related field.
- Experience with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Familiarity with big data technologies such as Hadoop, Spark, etc.
- Contributions to open-source projects related to AI/NLP
Building the machine learning production (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop ML pipelines to support
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Develop MLOps components in Machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 3-5 years experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience( i.e. Jenkins, Git hub action,
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production System(or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-ofthe-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development or Datawarehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production ready ML pipeline.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management and communication skills & creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including Lang chain, llama index, vector databases, Prompt engineering (COT, TOT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities.
- Familiarity with Open-source LLMs, including tools like TensorFlow/Pytorch and Huggingface. Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: Minimum of 6+ years of proven experience as a Full Stack Developer or similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Total Yearly Compensation: ₹25 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
Responsibilities:
• Analyze and understand business requirements and translate them into efficient, scalable business logic.
• Develop, test, and maintain software that meets new requirements and integrates well with existing systems.
• Troubleshoot and debug software issues and provide solutions.
• Collaborate with cross-functional teams to deliver high-quality products, including product managers, designers, and developers.
• Write clean, maintainable, and efficient code.
• Participate in code reviews and provide constructive feedback to peers.
• Communicate effectively with team members and stakeholders to understand requirements and provide updates.
Required Skills:
• Strong problem-solving skills with the ability to analyze complex issues and provide solutions.
• Ability to quickly understand new problem statements and translate them into functional business logic.
• Proficiency in at least one programming language such as Java, Node.js, or C/C++.
• Strong understanding of software development life cycle (SDLC).
• Excellent communication skills, both verbal and written.
• Team player with the ability to collaborate effectively with different teams.
Preferred Qualifications:
• Experience with Java, Golang, or Rust is a plus.
• Familiarity with cloud platforms, microservices architecture, and API development.
• Prior experience working in an agile environment.
• Strong debugging and optimization skills.
Educational Qualifications:
• Bachelor's degree in Computer Science, Engineering, related field, or equivalent work experience.
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Company: Optimum Solutions
About the company: Optimum solutions is a leader in a sheet metal industry, provides sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. Starting from tools through software, machines, we are one stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes, Hands-on experience in any SQL queries and Database server wrt managing deployment.
- Implementing automated testing platforms, unit tests, and CICD Pipeline
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platform, such as Docker
Job Description
- We are looking for a good Python Developer with Knowledge of Machine learning and deep learning framework.
- Your primary focus will be working the Product and Usecase delivery team to do various prompting for different Gen-AI use cases
- You will be responsible for prompting and building use case Pipelines
- Perform the Evaluation of all the Gen-AI features and Usecase pipeline
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Take care of entire prompt life cycle like prompt design, prompt template creation, prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt gaurdrails to prevent attacks like prompt injection, jail braking and prompt leaking
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
Skills and Qualifications
- Strong proficiency with Python, DJANGO framework and REGEX
- Good understanding of Machine learning framework Pytorch and Tensorflow
- Knowledge of Generative AI and RAG Pipeline
- Good in microservice design pattern and developing scalable application.
- Ability to build and consume REST API
- Fine tune and perform code optimization for better performance.
- Strong understanding on OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge in API Gateways and proxies, such as WSO2, KONG, nginx, Apache HTTP Server.
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge on Microservices architecture, behaviour, dependencies, scalability etc.
- Experience in deploying on Cloud platform like Azure or AWS
- Familiar and working experience with DevOps tools like Azure DEVOPS, Ansible, Jenkins, Terraform
At BigThinkCode, our technology solves complex problems. We are looking for Senior test engineer to join us at Chennai. This is an opportunity to join a growing team and make a substantial impact at BigThinkCode Technologies.
Please find below our job description, if interested apply / reply sharing your profile to connect and discuss.
Company: BigThinkCode Technologies
URL: https://www.bigthinkcode.com/
Experience: 3 – 5 years
Level: Test engineer (Senior)
Location: Chennai (Work from Office)
Joining time: Immediate – 4 weeks of time.
Mandatory skill: API/Web services testing using Restful/SOAP, Web automation, Python programming language basics along with frameworks lile PyUnit or Pytest, TFS or Jira, OOPs concepts.
Required skills:
• 3 - 5 years or above software testing experience, including mobile app testing experience.
• In depth understanding of SDLC and STLC
• API and Webservices testing using Restful or SOAP or Postman tools.
• Basic oops concepts and Python programming language is nice to have.
• Hands on experience with testing frameworks like PyUnit (Unittest), Pytest, Nosetest, Doctest and Robot is required.
• Design, develop test cases and execute test.
• 2+ years of experience in TFS or Jira
• Identify application issues, report discrepancies, prepare test report and recommend improvements.
• Track the issues raised by UAT and Production users..
• Be familiar with common test tools (Postman/Charles/Fiddler etc.), have basic understanding of HTTP/HTTPS protocol.
• Ownership attitude of the product and team player who fits in with her/his team.
Job Title - Senior Backend Engineer
About Tazapay
Tazapay is a cross border payment service provider. They offer local collections via local payment methods, virtual accounts and cards in over 70 markets. The merchant does not need to create local entities anywhere and Tazapay offers the additional compliance framework to take care of local regulations and requirements. This results in decreased transaction costs, fx transparency and higher auth rates.
They are licensed and backed by leading investors. www.tazapay.com
What’s exciting waiting for You?
This is an amazing opportunity for you to join a fantastic crew before the rocket ship launch. It will be a story you will carry with you through your life and have the unique experience of building something ground up and have the satisfaction of seeing your product being used and paid for by thousands of customers. You will be a part of a growth story be it anywhere - Sales, Software Development, Marketing, HR, Accounting etc.
We believe in a culture of openness, innovation & great memories together.
Are you ready for the ride?
Find what interesting things you could do with us.
About the Backend Engineer role
Responsibilities (not exhaustive)
- Design, write and deliver highly scalable, reliable and fault tolerant systems with minimal guidance
- Participate in code and design reviews to maintain our high development standards
- Partner with the product management team to define and execute the feature roadmap
- Translate business requirements into scalable and extensible design
- Proactively manage stakeholder communication related to deliverables, risks, changes and dependencies
- Coordinate with cross functional teams (Mobile, DevOps, Data, UX, QA etc.) on planning and execution
- Continuously improve code quality, product execution, and customer delight
- Willingness to learn new languages and methodologies
- An enormous sense of ownership
- Engage in service capacity and demand planning, software performance analysis, tuning and optimization
The Ideal Candidate
Education
- Degree in Computer Science or equivalent with 5+ years of experience in commercial software development in large distributed systems
Experience
- Hands-on experience in designing, developing, testing and deploying applications on Golang, Ruby,Python, .Net Core or Java for large scale applications
- Deep knowledge of Linux as a production environment
- Strong knowledge of data structures, algorithms, distributed systems, and asynchronous architectures
- Expert in at least 1 of the following languages: Golang, Python, Ruby, Java, C, C++
- Proficient in OOP, including design patterns.
- Ability to design and implement low latency RESTful services
- Hands-on coder who has built backend services that handle high volume traffic.
- Strong understanding of system performance and scaling
- Possess excellent communication, sharp analytical abilities with proven design skills, able to think critically of the current system in terms of growth and stability
- Data modeling experience in both Relational and NoSQL databases
- Continuously refactor applications to ensure high-quality design
- Ability to plan, prioritize, estimate and execute releases with good degree of predictability
- Ability to scope, review and refine user stories for technical completeness and to alleviate dependency risks
- Passion for learning new things, solving challenging problems
- Ability to get stuff done!
Nice to have
- Familiarity with Golang ecosystem
- Familiarity with running web services at scale; understanding of systems internals and networking are a plus
- Be familiar with HTTP/HTTPS communication protocols.
Abilities and Traits
- Ability to work under pressure and meet deadlines
- Ability to provide exceptional attention to details of the product.
- Ability to focus for extended periods of repetitious activity.
- Ability to think ahead and anticipate problems, issues and solutions
- Work well as a team player and help the team members to resolve issues
- Be committed to quality and be structured in approach
- Excellent and demonstrable concept formulation, logical and analytical skills
- Excellent planning, organizational, and prioritization skills
Location - Chennai - India
Join our team and let's groove together to the rhythm of innovation and opportunity!
Your Buddy
Tazapay
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
at Optisol Business Solutions Pvt Ltd
Role Summary
As a Data Engineer, you will be an integral part of our Data Engineering team supporting an event-driven server less data engineering pipeline on AWS cloud, responsible for assisting in the end-to-end analysis, development & maintenance of data pipelines and systems (DataOps). You will work closely with fellow data engineers & production support to ensure the availability and reliability of data for analytics and business intelligence purposes.
Requirements:
· Around 4 years of working experience in data warehousing / BI system.
· Strong hands-on experience with Snowflake AND strong programming skills in Python
· Strong hands-on SQL skills
· Knowledge with any of the cloud databases such as Snowflake,Redshift,Google BigQuery,RDS,etc.
· Knowledge on debt for cloud databases
· AWS Services such as SNS, SQS, ECS, Docker, Kinesis & Lambda functions
· Solid understanding of ETL processes, and data warehousing concepts
· Familiarity with version control systems (e.g., Git/bit bucket, etc.) and collaborative development practices in an agile framework
· Experience with scrum methodologies
· Infrastructure build tools such as CFT / Terraform is a plus.
· Knowledge on Denodo, data cataloguing tools & data quality mechanisms is a plus.
· Strong team player with good communication skills.
Overview Optisol Business Solutions
OptiSol was named on this year's Best Companies to Work for list by Great place to work. We are a team of about 500+ Agile employees with a development center in India and global offices in the US, UK (United Kingdom), Australia, Ireland, Sweden, and Dubai. 16+ years of joyful journey and we have built about 500+ digital solutions. We have 200+ happy and satisfied clients across 24 countries.
Benefits, working with Optisol
· Great Learning & Development program
· Flextime, Work-at-Home & Hybrid Options
· A knowledgeable, high-achieving, experienced & fun team.
· Spot Awards & Recognition.
· The chance to be a part of next success story.
· A competitive base salary.
More Than Just a Job, We Offer an Opportunity To Grow. Are you the one, who looks out to Build your Future & Build your Dream? We have the Job for you, to make your dream comes true.
5-7 years of experience in Data Engineering with solid experience in design, development and implementation of end-to-end data ingestion and data processing system in AWS platform.
2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.
Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.
Experience in build of data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms or similar source systems.
Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.
Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.
Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.
Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.
Experience in working in Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GITHUB), JIRA, JIRA Xray and Confluence.
Experience or exposure to design and development using Full Stack tools.
Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.
Bachelor's or master's degree in computer science or related field.
Position Overview: We are seeking a talented Data Engineer with expertise in Power BI to join our team. The ideal candidate will be responsible for designing and implementing data pipelines, as well as developing insightful visualizations and reports using Power BI. Additionally, the candidate should have strong skills in Python, data analytics, PySpark, and Databricks. This role requires a blend of technical expertise, analytical thinking, and effective communication skills.
Key Responsibilities:
- Design, develop, and maintain data pipelines and architectures using PySpark and Databricks.
- Implement ETL processes to extract, transform, and load data from various sources into data warehouses or data lakes.
- Collaborate with data analysts and business stakeholders to understand data requirements and translate them into actionable insights.
- Develop interactive dashboards, reports, and visualizations using Power BI to communicate key metrics and trends.
- Optimize and tune data pipelines for performance, scalability, and reliability.
- Monitor and troubleshoot data infrastructure to ensure data quality, integrity, and availability.
- Implement security measures and best practices to protect sensitive data.
- Stay updated with emerging technologies and best practices in data engineering and data visualization.
- Document processes, workflows, and configurations to maintain a comprehensive knowledge base.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or related field. (Master’s degree preferred)
- Proven experience as a Data Engineer with expertise in Power BI, Python, PySpark, and Databricks.
- Strong proficiency in Power BI, including data modeling, DAX calculations, and creating interactive reports and dashboards.
- Solid understanding of data analytics concepts and techniques.
- Experience working with Big Data technologies such as Hadoop, Spark, or Kafka.
- Proficiency in programming languages such as Python and SQL.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Excellent analytical and problem-solving skills with attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
- Ability to work independently and manage multiple tasks simultaneously in a fast-paced environment.
Preferred Qualifications:
- Advanced degree in Computer Science, Engineering, or related field.
- Certifications in Power BI or related technologies.
- Experience with data visualization tools other than Power BI (e.g., Tableau, QlikView).
- Knowledge of machine learning concepts and frameworks.
Role: Python-Django Developer
Location: Noida, India
Description:
- Develop web applications using Python and Django.
- Write clean and maintainable code following best practices and coding standards.
- Collaborate with other developers and stakeholders to design and implement new features.
- Participate in code reviews and maintain code quality.
- Troubleshoot and debug issues as they arise.
- Optimize applications for maximum speed and scalability.
- Stay up-to-date with emerging trends and technologies in web development.
Requirements:
- Bachelor's or Master's degree in Computer Science, Computer Engineering or a related field.
- 4+ years of experience in web development using Python and Django.
- Strong knowledge of object-oriented programming and design patterns.
- Experience with front-end technologies such as HTML, CSS, and JavaScript.
- Understanding of RESTful web services.
- Familiarity with database technologies such as PostgreSQL or MySQL.
- Experience with version control systems such as Git.
- Ability to work in a team environment and communicate effectively with team members.
- Strong problem-solving and analytical skills.
at Wekan Enterprise Solutions
Backend - Software Development Engineer III
Experience - 7+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IOT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth stage startups. You will have the opportunity to contribute directly into mission critical projects directly interacting with business stakeholders, customer’s technical teams and MongoDB solutions Architects.
Location - Chennai or Bangalore
- Relevant experience of 7+ years building high-performance back-end applications with at least 3 or more projects delivered using the required technologies
- Good problem solving skills
- Strong mentoring capabilities
- Good understanding of software development life cycle
- Strong experience in system design and architecture
- Strong focus on quality of work delivered
- Excellent verbal and written communication skills
Required Technical Skills
- Extensive hands-on experience building high-performance web back-ends using Node.Js and Javascript/Typescript
- Strong experience with Express.Js framework
- Working experience with Python web app development or python scripting
- Implementation experience in monolithic and microservices architecture
- Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
- Experience integrating with any 3rd party services such as cloud SDKs (AWS, Azure) authentication, etc.
- Hands-on experience with Kafka, RabbitMQ or any similar technologies.
- Exposure into unit testing with frameworks such as Mocha, Chai, Jest or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies
Key Tasks & Accountability:
- Collaborate with development teams and product managers to create innovative software solutions.
- Able to develop entire architecture, responsive design, user interaction, and user experience.
- The ability to use databases, proxies, APIs, version control systems, and third-party applications.
- Keep track of new development-related tools, frameworks, methods, and architectures.
- The developer is in charge of creating APIs depending on the architecture of the production application.
- Keeping up with the latest advancements in programming languages and server apps.
Skills:
- Comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries.
- Knowledge of React, Redux and API Integration.
- Experience with backend technology like NodeJs, Microservices, MVC Framework and Data Base connection.
- Knowledge on SQL/NoSQL such as MySql, mongoDB.
- Knowledge of cloud such as AWS.
- Team player with a knack for visual design and utility.
Role : Web Scraping Engineer
Experience : 2 to 3 Years
Job Location : Chennai
About OJ Commerce:
OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully-functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention.
Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States.
As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.
Job Summary:
We are seeking a Web Scraping Engineer and Data Extraction Specialist who will play a crucial role in our data acquisition and management processes. The ideal candidate will be proficient in developing and maintaining efficient web crawlers capable of extracting data from large websites and storing it in a database. Strong expertise in Python, web crawling, and data extraction, along with familiarity with popular crawling tools and modules, is essential. Additionally, the candidate should demonstrate the ability to effectively utilize API tools for testing and retrieving data from various sources. Join our team and contribute to our data-driven success!
Responsibilities:
- Develop and maintain web crawlers in Python.
- Crawl large websites and extract data.
- Store data in a database.
- Analyze and report on data.
- Work with other engineers to develop and improve our web crawling infrastructure.
- Stay up to date on the latest crawling tools and techniques.
Required Skills and Qualifications:
- Bachelor's degree in computer science or a related field.
- 2-3 years of experience with Python and web crawling.
- Familiarity with tools / modules such as
- Scrapy, Selenium, Requests, Beautiful Soup etc.
- API tools such as Postman or equivalent.
- Working knowledge of SQL.
- Experience with web crawling and data extraction.
- Strong problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Excellent communication and documentation skills.
What we Offer
• Competitive salary
• Medical Benefits/Accident Cover
• Flexi Office Working Hours
• Fast paced start up
Function: Software Engineering → Backend Development
- Python
- Flask
Requirements:
- Should be a go-getter, ready to shoulder more responsibilities, shows enthusiasm and interest in work.
- Excellent core Python skills including threading, dictionary, OOPS Concept, Data structure, Web service.
- Should have work experience on following stacks/libraries: Flask
- Familiarity with some ORM (Object Relational Mapper) libraries
- Able to integrate multiple data sources and databases into one system
- Understanding of the threading limitations of Python, and multi-process architecture Familiarity with event-driven programming in Python
- Basic understanding of front-end technologies, such as Angular, JavaScript, HTML5 and CSS3
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Understanding of accessibility and security compliance
- Experience in both RDBMS(MySQL), NoSQL databases (MongoDB, HDFS, HIVE etc) or in-memory caching technologies such as ehcache etc is preferable.
- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- Bert
Preferred Requirements
- Experience in Computer Vision is preferred
Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune /Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have knowledge and created forms using Django. Crispy forms is a plus point.
- Must have leadership experience
- Should have good understanding of function based and class based views.
- Should have good understanding about authentication (JWT and Token authentication)
- Django – at least one senior with deep Django experience. The other 1 or 2 can be mid to senior python or Django
- FrontEnd – Must have React/ Angular, CSS experience
- Database – Ideally SQL but most senior has solid DB experience
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server Side Rendered/jQuery is older tech but is what we are ok with for internal tools. This is a good combination of late adopter agile stack integrated within an enterprise. Potentially we can push them to React for some discreet projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven to deployment / setup to AWS, GCP.
- Load balancing, more advanced services, task queues, etc.
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data integrations and Data Ops,
Job Reference ID:BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
7 years of work experience with ETL, Data Modelling, and Data Architecture Proficient in ETL optimization, designing, coding, and tuning big data processes using Pyspark Extensive experience to build data platforms on AWS using core AWS services Step function, EMR, Lambda, Glue and Athena, Redshift, Postgres, RDS etc and design/develop data engineering solutions. Orchestrate using Airflow.
Technical Experience:
Hands-on experience on developing Data platform and its components Data Lake, cloud Datawarehouse, APIs, Batch and streaming data pipeline Experience with building data pipelines and applications to stream and process large datasets at low latencies.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python, Pyspark.
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch events.
➢ You will be working in collaboration with other teams. Good communication must.
➢ Must have experience in using AWS services API, AWS CLI and SDK
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes Expert-level skills in writing and optimizing SQL Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
Indian Based IT Service Organization
Greetings!!!!
We are looking for a data engineer for one of our premium clients for their Chennai and Tirunelveli location
Required Education/Experience
● Bachelor’s degree in computer Science or related field
● 5-7 years’ experience in the following:
● Snowflake, Databricks management,
● Python and AWS Lambda
● Scala and/or Java
● Data integration service, SQL and Extract Transform Load (ELT)
● Azure or AWS for development and deployment
● Jira or similar tool during SDLC
● Experience managing codebase using Code repository in Git/GitHub or Bitbucket
● Experience working with a data warehouse.
● Familiarity with structured and semi-structured data formats including JSON, Avro, ORC, Parquet, or XML
● Exposure to working in an agile work environment
About the job
Whirldata Inc. is an AI/Data Sciences/App Dev company established in 2017 to provide management and technology consulting services to small and medium enterprises across the globe. We are headquartered in California and our team works out of Chennai. The specific focus is on
- Helping clients identify areas where Data Sciences and AI-based approaches can increase revenues, decrease costs or enhance customer engagement experience
- Help clients translate business needs into process flows for exploratory and cutting-edge applications
- Provide appropriate and targeted solutions that can achieve the planned objective
Whirldatas management team consists of individuals with a combined 45+ years of management and technology consulting experience, spanning multiple verticals such as finance, energy, retail, manufacturing and supply chain/logistics. Please look up our website and go through the blogs/videos/articles to find out more about us.
Working Philosophy
Whirldata works on the principle that, larger business goals come first and any technology-intensive solutions need to support necessary business goals. Hence all engagements start as a management consulting exercise and solution building follows after a thorough understanding of business needs.At Whirldata, we put our minds together, mix technology, art & math to deliver a viable, scalable and affordable business solution. We are passionate about what we do because we know that our work has the power to improve businesses. You can join our team working at the forefront of new technology, solving the challenges that impact both the front-end and back-end architectures, and ultimately, delivering amazing global user experiences.
Who we are looking for:
Full Stack Engineer (Position based in Chennai)
The following criteria are mandatory requirements and we strongly encourage that you apply only if you meet all these criteria:
1. Minimum 3 years of work experience
2. Proven capability to understand clients business needs
3. At least one demonstrable stint of architecting a database from scratch
4. Multiple programming language capabilities a must. In-depth knowledge of at least one programming language
5. Willingness to wear multiple hats depending on our business needs will be a priority
6. At least 2 years of hands-on experience in front-end development
7. Knowledge of current tools, platforms and languages an absolute must
8. Hands-on experience in cloud technologies and project management capabilities will be considered favourably
What do you get in return
- AI, Data Sciences and App dev require individuals who are both business and tech-savvy. We will turn you into one and make you a rockstar!
- You will get to work in an environment where your effort helps the customer and you will get to see it
- We will provide on-the-job training that will make you a doer and not a talker on data sciences, AI and app dev
- Opportunities to shine and pave your own way forward. We are good at identifying your talents and providing a platform where you will get immense satisfaction from demonstrating the same.
- Of course - we will pay you well too!
If you are interested - please apply with a small note with your thoughts on why you find this opportunity exciting and why you are willing to consider a move.
Are you interested in joining the team behind Amazon’s newest innovation? Come help us work on world class software for our customers!
The Amazon Kindle Reader and Shopping Support Engineering team provides production engineering support and is also responsible for providing multifaceted services to the Kindle digital product family of development teams and working with production operations teams for software product release coordination and deployment. This job requires you to hit the ground running and your ability to learn quickly and work on disparate and overlapping tasks that will define your success
Job responsibilities
- Provide support of incoming tickets, including extensive troubleshooting tasks, with responsibilities covering multiple products, features and services
- Work on operations and maintenance driven coding projects, primarily in Java and C++
- Software deployment support in staging and production environments
- Develop tools to aid operations and maintenance
- System and Support status reporting
- Ownership of one or more Digital products or components
- Customer notification and workflow coordination and follow-up to maintain service level agreements
- Work with support team for handing-off or taking over active support issues and creating a team specific knowledge base and skill set
BASIC QUALIFICATIONS
- 4+ years of software development, or 4+ years of technical support experience
- Experience troubleshooting and debugging technical systems
- Experience in Unix
- Experience scripting in modern program languages
- Experience in agile/scrum or related collaborative workflow
- Experience troubleshooting and documenting findings
PREFERRED QUALIFICATIONS
- Knowledge of distributed applications/enterprise applications
- Knowledge of UNIX/Linux operating system
- Experience analyzing and troubleshooting RESTful web API calls
- Exposure to iOS (SWIFT) and Android (Native) App support & development
A software developer's job description may vary depending on the organization and specific project, but generally, it includes:
- Bachelor's degree in computer science, software engineering, or a related field (sometimes, relevant experience can substitute for formal education).
- Proficiency in one or more programming languages and related technologies.
- Strong problem-solving skills and attention to detail.
- Knowledge of software development methodologies (e.g., Agile, Scrum).
- Familiarity with software development tools, IDEs, and frameworks.
- Excellent communication skills for effective collaboration with team members and stakeholders.
- Ability to work independently and in a team.
- Continuous learning to stay updated with the latest technology trends.
Company founded by ex Samsung management professionals
Key Responsibilities:
1. Design and Development: Lead the design and development of web
applications using Python and Flask, ensuring code quality, scalability, and
performance.
2. Architecture: Collaborate with the architecture team to design and implement
robust, maintainable, and scalable software solutions.
3. API Development: Develop RESTful APIs using Flask to support front-end
applications and external integrations.
4. Database Management: Design and optimize database schemas, write efficient
SQL queries, and work with databases like PostgreSQL, MySQL, or NoSQL
solutions.
5. Testing and Debugging: Write unit tests and perform code reviews to maintain
code quality. Debug and resolve complex issues as they arise.
6. Security: Implement security best practices, including data encryption,
authentication, and authorization mechanisms.
7. Performance Optimization: Identify and address performance bottlenecks in
the application, making improvements for speed and efficiency.
8. Documentation: Maintain clear and comprehensive documentation for code,
APIs, and development processes.
9. Collaboration: Collaborate with cross-functional teams, including front-end
developers, product managers, and QA engineers, to deliver high-quality
software.
10. Mentorship: Provide guidance and mentorship to junior developers, sharing your
knowledge and best practices.
Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related
field.
Proven experience (4-5 years) as a Python developer, with a strong emphasis on
Flask.
Solid understanding of web development principles, RESTful APIs, and best
practices.
Proficiency in database design and SQL, as well as experience with database
systems like PostgreSQL or MySQL.
Familiarity with front-end technologies (HTML, CSS, JavaScript) and related
frameworks is a plus.
Strong problem-solving skills and the ability to work in a fast-paced, collaborative
environment.
Excellent communication skills and the ability to work effectively in a team.
Knowledge of containerization and deployment tools (e.g., Docker, Kubernetes)
is a plus.
Analytics Job Description
We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will
partner closely with leaders across the organization, working together to understand the how
and why of people, team and company challenges, workflows and culture. The team is
responsible for delivering data and insights that drive decision-making, execution, and
investments for our product initiatives.
You will work cross-functionally with product, marketing, sales, engineering, finance, and our
customer-facing teams enabling them with data and narratives about the customer journey.
You’ll also work closely with other data teams, such as data engineering and product analytics,
to ensure we are creating a strong data culture at Blend that enables our cross-functional partners
to be more data-informed.
Role : DataEngineer
Please find below the JD for the DataEngineer Role..
Location: Guindy,Chennai
How you’ll contribute:
• Develop objectives and metrics, ensure priorities are data-driven, and balance short-
term and long-term goals
• Develop deep analytical insights to inform and influence product roadmaps and
business decisions and help improve the consumer experience
• Work closely with GTM and supporting operations teams to author and develop core
data sets that empower analyses
• Deeply understand the business and proactively spot risks and opportunities
• Develop dashboards and define metrics that drive key business decisions
• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch,
and Workato
• Design our Analytics and Business Intelligence architecture, assessing and
implementing new technologies that fitting
• Work with our engineering teams to continually make our data pipelines and tooling
more resilient
Who you are:
• Bachelor’s degree or equivalent required from an accredited institution with a
quantitative focus such as Economics, Operations Research, Statistics, Computer Science OR 1-3 Years of Experience as a Data Analyst, Data Engineer, Data Scientist
• Must have strong SQL and data modeling skills, with experience applying skills to
thoughtfully create data models in a warehouse environment.
• A proven track record of using analysis to drive key decisions and influence change
• Strong storyteller and ability to communicate effectively with managers and
executives
• Demonstrated ability to define metrics for product areas, understand the right
questions to ask and push back on stakeholders in the face of ambiguous, complex
problems, and work with diverse teams with different goals
• A passion for documentation.
• A solution-oriented growth mindset. You’ll need to be a self-starter and thrive in a
dynamic environment.
• A bias towards communication and collaboration with business and technical
stakeholders.
• Quantitative rigor and systems thinking.
• Prior startup experience is preferred, but not required.
• Interest or experience in machine learning techniques (such as clustering, decision
tree, and segmentation)
• Familiarity with a scientific computing language, such as Python, for data wrangling
and statistical analysis
• Experience with a SQL focused data transformation framework such as dbt
• Experience with a Business Intelligence Tool such as Mode/Tableau
Mandatory Skillset:
-Very Strong in SQL
-Spark OR pyspark OR Python
-Shell Scripting
Roles & Responsibilities:
-Adopt novel and breakthrough Deep Learning/Machine Learning technology to fully solve real world problems for different industries. -Develop prototypes of machine learning models based on existing research papers.
-Utilize published/existing models to meet business requirements. Tweak existing implementations to improve efficiencies and adapt for use-case variations.
-Optimize machine learning model training and inference time. -Work closely with development and QA teams in transitioning prototypes to commercial products
-Independently work end-to-end from data collection, preparation/annotation to validation of outcomes.
-Define and develop ML infrastructure to improve efficiency of ML development workflows.
Must Have:
- Experience in productizing and deployment of ML solutions.
- AI/ML expertise areas: Computer Vision with Deep Learning. Experience with object detection, classification, recognition; document layout and understanding tasks, OCR/ICR
. - Thorough understanding of full ML pipeline, starting from data collection to model building to inference.
- Experience with Python, OpenCV and at least a few framework/libraries (TensorFlow / Keras / PyTorch / spaCy / fastText / Scikit-learn etc.)
- Years with relevant experience:
5+ -Experience or Knowledge in ML OPS.
Good to Have: NLP: Text classification, entity extraction, content summarization. AWS, Docker.
Design, implement, and improve the analytics platform
Implement and simplify self-service data query and analysis capabilities of the BI platform
Develop and improve the current BI architecture, emphasizing data security, data quality
and timeliness, scalability, and extensibility
Deploy and use various big data technologies and run pilots to design low latency
data architectures at scale
Collaborate with business analysts, data scientists, product managers, software development engineers,
and other BI teams to develop, implement, and validate KPIs, statistical analyses, data profiling, prediction,
forecasting, clustering, and machine learning algorithms
Educational
At Ganit we are building an elite team, ergo we are seeking candidates who possess the
following backgrounds:
7+ years relevant experience
Expert level skills writing and optimizing complex SQL
Knowledge of data warehousing concepts
Experience in data mining, profiling, and analysis
Experience with complex data modelling, ETL design, and using large databases
in a business environment
Proficiency with Linux command line and systems administration
Experience with languages like Python/Java/Scala
Experience with Big Data technologies such as Hive/Spark
Proven ability to develop unconventional solutions, sees opportunities to
innovate and leads the way
Good experience of working in cloud platforms like AWS, GCP & Azure. Having worked on
projects involving creation of data lake or data warehouse
Excellent verbal and written communication.
Proven interpersonal skills and ability to convey key insights from complex analyses in
summarized business terms. Ability to effectively communicate with multiple teams
Good to have
AWS/GCP/Azure Data Engineer Certification
Title: Platform Engineer Location: Chennai Work Mode: Hybrid (Remote and Chennai Office) Experience: 4+ years Budget: 16 - 18 LPA
Responsibilities:
- Parse data using Python, create dashboards in Tableau.
- Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
- Migrate Datastage jobs to Snowflake, optimize performance.
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization.
- Conduct unit testing and web application testing.
- Implement Apache Airflow and handle production migration.
- Apply data warehousing techniques for data cleansing and dimension modeling.
Requirements:
- 4+ years of experience as a Platform Engineer.
- Strong Python skills, knowledge of Tableau.
- Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
- Proficient in Unix Shell Scripting and SQL.
- Familiarity with ETL tools like DataStage and DMExpress.
- Understanding of Apache Airflow.
- Strong problem-solving and communication skills.
Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, Visit! - https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research, Build/Implement systems, services and tooling to improve uptime, reliability and maintainability of our backend infrastructure. And to meet our internal SLOs and customer-facing SLAs.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash etc.,
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, Visit! - https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research, Build/Implement systems, services and tooling to improve uptime, reliability and maintainability of our backend infrastructure. And to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, Visit! - https://www.happyfox.com/
A Senior Automation Test Engineer at HappyFox is an integral part of the QA team responsible to ensure product quality and integrity with a special focus on the development and execution of test automation.
Responsibilities:
- Owning the quality of any deliverables including new features and enhancements.
- Working closely with the product team in understanding the requirements and user workflow.
- Developing and executing test plans and test cases with a strong emphasis on using code to solve technical challenges and shorten the regression test cycle through automation.
- Performing automated API testing.
- Contributing to building of a Continuous Integration (CI) environment and ongoing process improvement activities.
- Identifying required improvements in the test and development processes; making contributions to our automation tools that address specific needs.
- Raising defects/bugs and tracking them till closure.
Requirements:
- Minimum 5 years of relevant experience in QA with at least 3 years of experience in automation.
- Expertise in Selenium tool for automation testing.
- Good understanding of the Agile software development methodology (Kanban or Scrum) and QA's role in it.
- Good knowledge of object-oriented programming, along with requisite coding and debugging skills.
- Design and development skills in Python and/or Java.
- Some knowledge of continuous integration practices.
- Excellent verbal and written communication skills.
- Experience working in SaaS based product company (optional)
- Knowledge of Performance / Load testing tool (optional)
- Experience working for a bootstrapped high-growth startup (optional)
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, Visit! - https://www.happyfox.com/
We are looking for an Automation Test Engineer with a natural flair for solving complex problems and making life easier for others. You'd be a part of our dynamic QA team responsible for maintaining and enhancing the quality of our products.
Responsibilities:
- Owning the quality of any deliverables including new features and enhancements
- Working closely with the product team in understanding the requirements and user workflow
- Writing highly efficient manual test cases and performing functional, ad-hoc and regression testing
- Designing, developing and executing automation test scripts
- Raising defects/bugs and tracking them till closure
Requirements:
- Bachelor's degree in engineering/IT/computer science or related field
- 2-4 years of relevant work experience in software testing
- Professional automation QA experience, using Selenium Webdriver with Python and/or Java
- Experience in using a defect tracking system to report, track and resolve defects
- Good understanding of the Agile software development methodology (Kanban or Scrum)
- Passion for software quality assurance, problem detection and analysis
- Experience working in SaaS based product company (optional)
- Worked for a bootstrapped high-growth startup (optional)
- HIL testing with dSPACE, Software in loop validations
- Vector tools (CANape & CANalyzer)
- Service tools like ET, EST, EDT, and other tools including CADetWIN, CAT ET, Forcast, ESP.
- Knowledge and experience in protocols including J1939, CAN, CDL (Internal) and Ethernet
- Understanding and Hands-on experience with HIL hardware/harness
- Automation logics, Python, CCAT scripting, Simulators, Control and Automation desk
Experience, responsibilities, expectations from role:
- 3-4 years of Validation or Automation Experience.
- Ability to work in a product development organization with strong focus on Quality and delivery as per commitment to customers.
- Passion to delve into complex existing code, product modules and come up with the optimal solution for the problem at hand.
- Willingness to be flexible and stretch when there are critical needs within the organization.
- Excellent communication and team working skills which is expected among a global team set up.
- Develop the Test plans, validate the Features.
- Perform Regression, Mandatory & Change specific validation.
- Investigate and Support DSN team on the issues reported from the customers in the field.
- Able write the Python Scripts by looking into the Test plans.
- Acquaintance of CANape, CANalyzer Tools, MRET test setup’s
FURIOUS FOX is looking for Embedded Developers with strong coding skills in C & C++ as well as experience with Embedded Linux.
Experience : (Minimum 7-10 yrs)
• Experienced in edge processing for connected building / industrial / consumer
appliances / automotive ECU
• Have a good understanding of IoT platforms and architecture
• Deep experience in operating systems eg: Linux, freeRTOS / kernel development/device drivers.
/ sensor drivers
• Have experience with various low-level communication protocols, memory devices, messaging
framework etc.
• Have a deep understanding of design principles, design patterns, container preparations
• Have developed hardware, OS abstraction layers, and sensor handlers services to manage various BSP, os standards
• Have experience with Python edge packages.
• Have a good understanding about IoT databases for edge computing
• Good understanding of connectivity application protocols and connectivity SDK for Wi-Fi and BT / BLE
• Experienced in arm architecture, peripheral devices and hardware board configurations
• Able to set up debuggers, configure build environments, and compilers and optimize code and performance.
Skills / Tools:
• Expert at object-oriented programming
• Modular programming
• C / C++ / JavaScript / Python
• Eclipse framework
• Target deployment techniques
• IoT framework
• Test framework
Highlights :
• Having AI / ML knowledge in applications
• Have worked on wireless protocols
• Ethernet / Wi-Fi / Bluetooth / BLE
• Highly exploratory attitude
• willing to venture in and learn new
technologies.
• Have done passionate projects based on self-interest.
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years overall and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
Job Responsibilities:
Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment to create high and low level design for concurrent high throughput, low latency software architecture.
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, cpu and resource management
- Analyze stack traces, memory profiles and production incident reports from traders and support teams
- Propose fixes, and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
We are looking for a Big Data Engineer with java for Chennai Location
Location : Chennai
Exp : 11 to 15 Years
Job description
Required Skill:
1. Candidate should have minimum 7 years of experience as total
2. Candidate should have minimum 4 years of experience in Big Data design and development
3. Candidate should have experience in Java, Spark, Hive & Hadoop, Python
4. Candidate should have experience in any RDBMS.
Roles & Responsibility:
1. To create work plans, monitor and track the work schedule for on time delivery as per the defined quality standards.
2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.
3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.
4. To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalation
Regards,
Priyanka S
7P8R9I9Y4A0N8K8A7S7
Bachelor’s degree in Computer Science needed. Preferably BCA, B.E CSE, B.E IT.
Job Role:
A Technical Writer is responsible for producing high-quality technical documentation appropriate to
its intended audience. This includes working with internal teams on product and document
requirements. Writing easy-to-use user interface text or online help content is in addition to job
duties.
Primary Responsibilities:
1. Work with internal teams to get an in-depth knowledge of the product and the documentation
requirements
2. Produce high-quality documentation that meets applicable standards and is appropriate for its
intended audience
3. Write easy-to-understand user interface text, online help and developer guides
4. Create Script for Course Video Content, Technical Lesson Plans, Technical Teaching
documents, etc.,
5. Analyze existing and potential content, focusing on reuse and single-sourcing opportunities
6. Create and maintain the information architecture
Core skills:
1. Good Writing Skills in English
2. Technical Expertise in Fundamentals of Computer Science and Coding
3. Organizing Ability
4. Responsible for Maintaining Data
Data Engineer- Senior
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What are you going to do?
Design & Develop high performance and scalable solutions that meet the needs of our customers.
Closely work with the Product Management, Architects and cross functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
Follow best practices that can be adopted in Bigdata stack.
Use your engineering experience and technical skills to drive the features and mentor the engineers.
What are we looking for ( Competencies) :
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
Overall 5 to 8 years of programming experience in Java, Python including object-oriented design.
Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like- Hive, Spark, Storm, Flink, Beam, Airflow, Nifi etc.
Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.
Data Store: Must have expertise in one of general-purpose No-SQL data stores like Elasticsearch, MongoDB, Redis, RedShift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
Benefits:
Competitive Salary Packages and benefits.
Collaborative, lively and an upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions and maintaining & strengthening client base
- Work with team to define business requirements, come up with analytical solution and deliver the solution with specific focus on Big Picture to drive robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as expert in Data Science, build framework to develop Production level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relation, at the same time adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines ANOVA , Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages SQL, Python/ R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in Statistical Modelling & Algorithms E.g. Hypothesis testing, Sample size estimation, A/B testing
- Knowledge in Mathematical programming – Linear Programming, Mixed Integer Programming etc , Stochastic Modelling – Markov chains, Monte Carlo, Stochastic Simulation, Queuing Models.
- Experience with Optimization Solvers (Gurobi, Cplex) and Algebraic programming Languages(PulP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity to deploy and monitor ML models in production, delivering data products to end-users.
- Experience with ML CI/CD pipelines.
client of peoplefirst consultants
Hi,
We are hiring for the position of Technical Lead - Full Stack for one of our premium clients.
Skills: React JS, Node JS, Python, Golang, HTML, CSS, JavaScript, Angular, Typescript.
Location : Chennai
JOB DESCRIPTION :
- Strong analytical and problem-solving skills.
- Ability to work independently, learn quickly and be proactive.
- 10-14 years of hands-on experience working on Web Full Stack technologies, with at least 4-6 years of experience developing applications with/on React/NextJS, NoSQL, and REST APIs.
- Proficiency in JavaScript/TypeScript (ES6), NodeJS, HTML5, CSS3, CSS Preprocessors, Web-pack, Gulp.
- Client-side scripting and JavaScript frameworks - jQUery, ReactJS, Redux, Babel, JSX.
- Experience in designing high-performance REST APIs and associated data structures.
- Familiarity with developing microservices using containerization technologies such as docker, Kubernetes, etc.
- Working knowledge of git and using branches for development.
- Strong analytical and problem-solving skills
- Ability to work independently, learn quickly and be proactive
- 10-14 years of hands-on experience working on Web Full Stack technologies, with at least 4-6 years of experience developing applications with/on React/NextJS, NoSQL, REST APIs
- Proficiency in JavaScript/TypeDcript (ES6), NodeJS, HTML5, CSS3, CSS Preprocessors, Webpack, Gulp
- Client-side scripting and JavaScript frameworks – jQUery, ReactJS, Redux, Babel, JSX
- Experience in designing high-performance REST APIs and associated data structures
- Familiarity with developing microservices using containerization technologies such as docker, Kubernetes, etc.
- Working knowledge of git and using branches for development
at Altimetrik
Location: Chennai, Pune,Banglore,jaipurExp: 5 yrs to 8 yrs
- Implement best practices for the engineering team across code hygiene, overall architecture design, testing, and deployment activities
- Drive technical decisions for building data pipelines, data lakes, and analyst access.
- Act as a leader within the engineering team, providing support and mentorship for teammates across functions
- Bachelor’s Degree in Computer Science or equivalent job experience
- Experienced developer in large data environments
- Experience using Git productively in a team environment
- Experience with Docker
- Experience with Amazon Web Services
- Ability to sit with business or technical SMEs to listen, learn and propose technical solutions to business problems
· Experience using and adapting to new technologies
· Take and understand business requirements and goals
· Work collaboratively with project managers and stakeholders to make sure that all aspects of the project are delivered as planned
· Strong SQL skills with MySQL or PostgreSQL
- Experience with non-relational databases and their role in web architectures desired
Knowledge and Experience:
- Good experience with Elixir and functional programming a plus
- Several years of python experience
- Excellent analytical and problem-solving skills
- Excellent organizational skills
Proven verbal and written cross-department and customer communication skills
Job Brief
We are looking for a strong mobile app developer who welcomes both engineering and maintenance tasks. The primary focus will be to implement new user interfaces and features together with automated unit and integration tests.
You will be working with our candid and collaborative team, where your knowledge and advice about application architecture and the newest mobile technologies will be highly appreciated. The code you write will need to be cleanly organized and of the highest quality. You’ll also help ensure solid application performance and an excellent user experience.
Main Responsibilities
Your responsibilities will include:
- Developing new features and user interfaces from wireframe models
- Ensuring the best performance and user experience of the application
- Fixing bugs and performance problems
- Writing clean, readable, and testable code
Cooperating with back-end developers, designers, and the rest of the team to deliver well-architected and high-quality solutions
Key Requirements
-
Extensive knowledge about mobile app development. This includes the whole process, from the first line of code to publishing in the store(s)
-
Deep knowledge of Android, iOS
-
Familiarity with RESTful APIs and mobile libraries for networking.
-
Familiarity with the JSON format
-
Experience with profiling and debugging mobile applications
-
Strong knowledge of architectural patterns—MVP, MVC, MVVM, and Clean Architecture—and the ability to choose the best solution for the app
-
Familiarity with Git
-
Familiarity with push notifications
-
Understanding mobile app design guidelines on each platform and being aware of their differences
-
Proficiency in Kotlin, Swift, Java, Python
-
GPS, Accelerometer, Gyroscope, Magnetometer, G-sensor
-
B.Tech CSE/IT/ECE