50+ Python Jobs in Pune | Python Job openings in Pune
Job Purpose:
To develop and maintain robust, scalable Python-based applications at Nirmitee.io, contributing to our product engineering and healthcare technology solutions with a focus on efficiency, innovation, and best practices.
Key Responsibilities:
- Python Development: Write clean, efficient, and maintainable Python code
- Develop and maintain scalable applications using Python frameworks like Django or Flask
- Implement RESTful APIs and integrate with front-end technologies
- Database Management: Work with both SQL and NoSQL databases, optimizing queries and ensuring data integrity
- Design and implement database structures to support application requirements
- Cloud and DevOps: Deploy and maintain applications in cloud environments (e.g., AWS, GCP)
- Implement CI/CD pipelines for automated testing and deployment
- Quality Assurance: Write and maintain comprehensive unit tests and integration tests
- Participate in code reviews to ensure high code quality and share knowledge
- Collaboration and Innovation: Work closely with cross-functional teams to deliver integrated solutions
- Stay updated with the latest Python ecosystem developments and suggest improvements
Required Skills and Qualifications:
- 3+ years of experience in Python development
- Bachelor's degree in Computer Science, Engineering, or a related field
- Proficiency in Python and its ecosystem (e.g., Django, Flask, FastAPI)
- Experience with SQL and NoSQL databases
- Familiarity with cloud platforms (preferably AWS) and containerization (Docker)
- Understanding of software development best practices and design patterns
Soft Skills:
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Self-motivated and able to work independently when required
- Adaptable and eager to learn new technologies
Company Culture:
Nirmitee.io is an innovative IT services company, driven by a passion for technology and a commitment to delivering exceptional solutions in product engineering and healthcare technology. We foster a culture of creativity, collaboration, and continuous learning.
Job Overview:
Python Lead responsibilities include developing and maintaining AI pipelines, including data preprocessing, feature extraction, model training, and evaluation.
Responsibilities:
- Designing, developing, and implementing generative AI models and algorithms utilizing state-of-the-art techniques such as GPT, VAE, and GANs.
- Conducting research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services.
- 7+ years of experience creating REST APIs using popular Python web frameworks such as Django, Flask, or FastAPI.
- Knowledge of databases such as PostgreSQL, Elasticsearch, and MongoDB.
- Knowledge of working with external integrations such as Redis, Kafka, S3, and EC2.
- Some experience in ML integrations will be a plus.
Requirements:
- Work experience as a Python Developer
- Team spirit
- Good problem-solving skills
- Proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of data structures, algorithms, and software engineering principles
- Nice to have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face
Job Description
We are looking for a talented Java Developer to work abroad. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirement Details
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience as a Java Developer or similar role.
Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).
Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
Familiarity with RESTful APIs and web services.
Understanding of version control systems (e.g., Git).
Solid understanding of object-oriented programming (OOP) principles.
Strong problem-solving skills and attention to detail.
Client is based in Pune.
Mandatory Skills – Python, Docker, MongoDB, SQL, Flask, SQLAlchemy, AWS
Additional Skills- Angular, Azure
Job Description
Responsibilities:
● Handle the analysis, troubleshooting, and resolution of product issues.
● Take ownership of providing technical support for the issues that could not be resolved by the Level 1 & Level 2 customer support teams.
● Responsible for handling product-related technical queries from the stakeholders.
● Coordinating the issues internally with the Team Leads, Product Managers, Roadmap Team, and Cross-functional Teams.
● Implement meaningful workarounds to shorten customer downtime, then follow up to ensure permanent solutions are implemented.
● Identify the root cause of an issue, document it, and provide feedback to the roadmap team on corrective actions to ensure such cases are not missed in the future.
● Participate in or organize a war room call to troubleshoot the issue, with collaboration with the cross-functional teams.
● Ensure the agreed SLAs are met
● Incident management, reporting, and RCA documentation in Jira
● Plan, design, develop, and maintain software application defect fixes in a timely manner
● Write clean, scalable, and efficient code.
● Implement software solutions using Python, Flask, Angular, PostgreSQL, SQLAlchemy, and AWS services.
● Conduct unit testing and integration testing to ensure software quality.
● Collaborate with Quality Assurance teams to identify and fix software defects.
● Create and maintain technical documentation for software projects.
● Use version control systems (Git) to manage and track code changes.
● Participate in code reviews to ensure code quality, consistency, and best practices.
● Work as part of an agile development team to deliver software increments.
● Be open to learning from teammates and from day-to-day experience.
Qualifications:
● Bachelor’s degree in Computer Science, Software Engineering, or a related field.
● Proven experience as a Software Engineer or in a similar role.
● Proficient in one or more programming languages.
● Strong problem-solving and analytical skills.
● Excellent communication and teamwork skills.
● Ready to work in flexible working hours when needed.
Requirements:
● Experience with web application development and support (frontend and/or backend).
● At least 5 years of experience in developing and supporting Python-related applications.
● Experience in building & supporting applications with containerization such as Docker
● Experience in writing relatively complex DB queries (any relational DB) is required.
● Experience in building web platforms with Angular 9+ is nice to have.
● Experience in building RESTful APIs using Python and web frameworks such as Flask is required.
● Experience working with an ORM such as SQLAlchemy or the Django ORM is required.
● Experience with cloud computing platforms, primarily AWS (Azure or Google Cloud knowledge is also considered), is nice to have.
● Should be comfortable with Agile methodologies, such as Scrum, Kanban
● Key competencies required: Support, Problem-Solving, Analytical, Collaboration, and Accountability
● AWS certifications would be an advantage.
About Lean Technologies
Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.
Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.
About the role:
Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.
Responsibilities
- Distributed systems architecture – understand and manage the most complex systems
- Continual reliability and performance optimization – enhancing observability stack to improve proactive detection and resolution of issues
- Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
- Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation
- Experience in technical leadership and setting technical direction for engineering projects
- Collaboration skills – working across teams to drive change and provide guidance
- Technical expertise – deep skills and the ability to act as subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
- Capacity planning – effectively forecasting demand and reacting to changes
- Analyze and improve efficiency, scalability, and stability of various system resources
- Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
Requirements
- 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
- Strong background in Linux/Unix Administration and networking concepts
- We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
- 3+ years of experience with managing Kubernetes clusters, Helm, Docker
- Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
- Work with CI/CD tools/services like Jenkins/GitHub-Actions/ArgoCD etc.
- Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
- Infrastructure as Code - Terraform
- Experience in production environments with both relational and NoSQL databases
- Coding with one or more of the following: Java, Python, and/or Go
Bonus
- MultiCloud or Hybrid Cloud experience
- OCI and GCP
Why Join Us?
At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.
Availability: Full time
Location: Pune, India
Experience: 4- 6 years
Tvarit Solutions Private Limited (wholly owned subsidiary of TVARIT GmbH, Germany). TVARIT provides software to reduce manufacturing waste such as scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 5 years. TVARIT was ranked among the top 8 of 490 AI companies by the European Data Incubator, and has received many more awards from the German government and industrial organizations, making TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a passionate Full Stack Developer Level 2 to join our technology team in Pune. You will be responsible for handling operations, design, development, testing, leading the software development team and working toward infrastructure development that will support the company’s solutions. You will get an opportunity to work closely on projects that will involve the automation of the manufacturing process.
Key responsibilities
- Creating plugins for the open-source framework Grafana using React & Golang.
- Develop pixel-perfect implementation of the front end using React.
- Design efficient DB interaction to optimize performance.
- Interact with and build Python APIs.
- Collaborate across teams and lead/train the junior developers.
- Design and maintain functional requirement documents and guides.
- Get feedback from, and build solutions for, users and customers.
Must have worked with these technologies:
- 2+ years of experience working with React-Typescript on a production level
- Experience with API creation using node.js or Python
- GitHub or any other version control system
- Have worked with a Linux/Unix-based operating system (Ubuntu, Debian, macOS, etc.)
Good to have experience:
- Python-based backend technologies, relational and non-relational databases, Python web frameworks (Django or Flask)
- Experience with the Go programming language
- Experience working with Grafana, or on any other micro frontend architecture framework
- Experience with Docker
- Leading a team for at least a year
Benefits and perks:
- A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
What is it like to work for a startup?
Working for TVARIT (deep-tech German IT Startup) can offer you a unique blend of innovation, collaboration, and growth opportunities. But it's essential to approach it with a willingness to adapt and thrive in a dynamic environment.
If this position sparked your interest, do apply today!
By submitting my documents for the recruitment process, I agree that my data will be processed for the purpose of the recruitment process and stored for an additional 6 months after the process is completed. Without your consent, we unfortunately cannot consider your documents for the recruitment process. You can revoke your consent at any time. Further information on how we process your data can be found in our privacy policy at the following link: https://tvarit.com/privacy-policy/
Who are we aka "About Us":
We are an early-stage Fintech Startup - working on exciting Fintech Products for some of the Top 5 Global Banks and building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Baker street Fintech Pvt Ltd (Parent Company) might be the place for you. We have a flat, ownership-oriented culture, and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won the Webby awards for Digital Strategy. In short, a bleeding edge team.
As Cambridge Wealth, we are well-established in the wealth and mutual fund distribution segment, having won awards from BSE Star as well as Mutual Fund houses. Our UHNI/HNI/NRI clients include renowned professionals from various industries.
What are we looking for a.k.a “The JD”:
We are seeking a skilled and detail-oriented Data Analyst to join our product team. As a Data Analyst, you will play a crucial role in extracting, analysing, and interpreting complex financial data to drive strategic decision-making and optimize our data solutions. The ideal candidate should possess a strong foundation in SQL / NoSQL databases, Python programming, and proficiency in tools like PostgreSQL and Excel. A deep understanding of financial concepts is also a plus. Additionally, having an interest in business intelligence tools and machine learning will be valuable for this role.
Responsibilities:
- Proficient in writing complex SQL Queries
- Utilize Python for data manipulation, analysis, and visualisation, using libraries such as pandas, matplotlib, psycopg etc.
- Perform database optimization, indexing, and query tuning to ensure high performance.
- Monitor and maintain data quality, troubleshoot data-related issues, and implement solutions to optimize data integrity and performance.
- Design, configure, and maintain PostgreSQL databases
- Set up and manage database clusters, replication, and backups for disaster recovery
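The query-plus-analysis workflow described above can be sketched in a few lines. This is only an illustrative sketch: the stdlib sqlite3 module stands in for PostgreSQL/psycopg, and the holdings table and its columns are hypothetical.

```python
import sqlite3

# sqlite3 (stdlib) stands in for PostgreSQL/psycopg here; the table,
# columns, and data are hypothetical examples.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE holdings (client_id INTEGER, fund TEXT, value REAL);
    -- An index on the grouping column, of the kind query tuning adds.
    CREATE INDEX idx_holdings_client ON holdings (client_id);
    INSERT INTO holdings VALUES
        (1, 'Equity A', 50000.0),
        (1, 'Debt B',   25000.0),
        (2, 'Equity A', 10000.0);
""")

# A grouped aggregate of the kind "complex SQL queries" implies:
# total portfolio value per client, largest first.
rows = conn.execute("""
    SELECT client_id, SUM(value) AS total_value
    FROM holdings
    GROUP BY client_id
    ORDER BY total_value DESC
""").fetchall()

for client_id, total in rows:
    print(client_id, total)
conn.close()
```

Against PostgreSQL the same pattern applies with a psycopg connection, and EXPLAIN ANALYZE supports the query-tuning work the role mentions.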
Preferred Qualifications:
- Intermediate-level Excel skills for data analysis and reporting.
- Strong communication skills to present findings effectively and recommendations to both technical and non-technical stakeholders.
- Detail-oriented mindset with a commitment to data accuracy and quality.
(Only applicants who have finished their educational commitments are requested to apply.)
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- Has worked (0-1.5 years preferably) or is looking to work specifically with an early-stage startup.
- You are ready to be a part of a Zero To One Journey which implies that you shall be involved in building fintech products and process from the ground up.
- You are comfortable to work in an unstructured environment with a small team where you decide what your day looks like and take initiative to take up the right piece of work, own it and work with the founding team on it.
- This is not an environment where someone will be checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to, else we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
- You want complete ownership for your role & be able to drive it the way you think is right.
- You can be a self-starter and take ownership of deliverables to develop a consensus with the team on approach and methods and deliver to them.
- Are looking to stick around for the long term and grow with the company.
Who are we a.k.a “About Cambridge Wealth”:
We are an early stage Fintech Startup - working on exciting Fintech Products for some of the Top 5 Global Banks and building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Bakerstreet Fintech might be the place for you. We have a flat, ownership-oriented culture, and deliver world class quality. You will be working with a founding team that has delivered over 26 industry leading product experiences and won the Webby awards for Digital Strategy. In short, a bleeding edge team.
What are we looking for a.k.a “The JD”:
We are looking for a motivated and energetic Flutter Intern who will design and build product application features across cross-platform devices. Just like Lego bricks that fit on top of one another, we are looking for someone who has experience using Flutter widgets that can be plugged together, customised, and deployed anywhere.
What will you be doing at CW a.k.a “Your Responsibilities”:
- Create multi-platform apps for iOS / Android using Flutter Development Framework.
- Participation in the process of analysis, designing, implementation and testing of new apps.
- Apply industry standards during the development process to ensure high quality.
- Translate designs and wireframes into high quality code.
- Ensure the best possible performance, quality, and responsiveness of the application.
- Help maintain code quality, organisation, and automation.
- Work on bug fixing and improving application performance
What should our ideal candidate have a.k.a “Your Requirements”:
- Knowledge of mobile app development.
- Worked at any stage startup or have developed projects of their own ideas.
- Good knowledge of Flutter and interest in developing mobile applications.
- Available for full time (in-office) internship.
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- You are ready to be a part of a Zero To One Journey which implies that you shall be involved in building fintech products and processes from the ground up.
- You are comfortable to work in an unstructured environment with a small team where you decide what your day looks like and take initiative to take up the right piece of work, own it and work with the founding team on it.
- This is not an environment where someone will be checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to, else we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements
- You want complete ownership for your role & be able to drive it the way you think is right. You are looking to stick around for the long term and grow with the company.
- You have the ability to be a self-starter and take ownership of deliverables to develop a consensus with the team on approach and methods and deliver to them.
Freshers are welcome to apply for a Trainee position.
Please note that this is an On-site/ Work from Office opportunity at our headquarters at Prabhat Road, Pune
at Wissen Technology
Job Requirements:
Intermediate Linux Knowledge
- Experience with shell scripting
- Familiarity with Linux commands such as grep, awk, sed
- Required
Advanced Python Scripting Knowledge
- Strong expertise in Python
- Required
Ruby
- Nice to have
Basic Knowledge of Network Protocols
- Understanding of TCP/UDP, Multicast/Unicast
- Required
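The TCP/UDP and unicast knowledge above can be illustrated with a minimal stdlib sketch of a UDP unicast round trip on the loopback interface; the payload is a hypothetical example, and multicast would additionally require joining a group (IP_ADD_MEMBERSHIP):

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)
addr = receiver.getsockname()

# Sender: UDP is connectionless, so a single sendto() suffices;
# unlike TCP there is no handshake and no delivery guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"market-data-tick", addr)

# Each recvfrom() returns one whole datagram (message-oriented,
# not a byte stream as with TCP).
payload, source = receiver.recvfrom(1024)
print(payload)

sender.close()
receiver.close()
```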
Packet Captures
- Experience with tools like Wireshark, tcpdump, tshark
- Nice to have
High-Performance Messaging Libraries
- Familiarity with tools like Tibco, 29West, LBM, Aeron
- Nice to have
Well-established, Pune-based NonStop io Technologies seeks extremely talented Python developers to help innovate in the software development space. You must have 2+ years of experience building and maintaining Python applications with a focus on web development. You should have a strong understanding of web frameworks like Django or Flask, and front-end technologies. You must be able to write clean, scalable, and efficient code, and you should be able to collaborate effectively with team members.
You should have a Bachelor's degree in Computer Science or equivalent experience. Excellent communication skills are essential. Familiarity with cloud services, version control systems, and Agile practices would be beneficial but is not necessary.
Expect talented, motivated, intense, and interesting co-workers. Must be willing to work from Kharadi, Pune.
Your compensation will include competitive salary and benefits.
We are an equal opportunity employer.
About Jeeva.ai
At Jeeva.ai, we're on a mission to revolutionize the future of work by building AI employees that automate all manual tasks—starting with AI Sales Reps. Our vision is simple: "Anything that doesn’t require deep human connection can be automated & done better, faster & cheaper with AI." We’ve created a fully automated SDR using AI that generates 3x more pipeline than traditional sales teams at a fraction of the cost.
As a dynamic startup, we are backed by Alt Capital (founded by Jack Altman & Sam Altman), Marc Benioff (CEO, Salesforce), Gokul (Board, Coinbase), Bonfire (investors in ChowNow), Techtsars (investors in Uber), Sapphire (investors in LinkedIn), and Microsoft. With $1M ARR in just 3 months after launch, we’re not just growing; we’re thriving and making a significant impact in the world of artificial intelligence.
As we continue to scale, we're looking for mid-senior Full Stack Engineers who are passionate, ambitious, and eager to make an impact in the AI-driven future of work.
About You
- Experience: 3+ years of experience as a Full Stack Engineer with a strong background in React, Python, MongoDB, and AWS.
- Automated CI/CD: Experienced in implementing and managing automated CI/CD pipelines using GitHub Actions and AWS Cloudformation.
- System Architecture: Skilled in architecting scalable solutions for systems at scale, leveraging caching strategies, message queues, and async/await paradigms for highly performant systems.
- Cloud-Native Expertise: Proficient in deploying cloud-native apps using AWS (Lambda, API Gateway, S3, ECS), with a focus on serverless architectures to reduce overhead and boost agility.
- Development Tooling: Proficient in a wide range of development tools such as FastAPI, React State Management, REST APIs, Websockets and robust version control using Git.
- AI and GPTs: Competent in applying AI technologies, particularly in using GPT models for natural language processing, automation and creating intelligent systems.
- Impact-Driven: You've built and shipped products that users love and have seen the impact of your work at scale.
- Ownership: You take pride in owning projects from start to finish and are comfortable wearing multiple hats to get the job done.
- Curious Learner: You stay ahead of the curve, eager to explore and implement the latest technologies, particularly in AI.
- Collaborative Spirit: You thrive in a team environment and can work effectively with both technical and non-technical stakeholders.
- Ambitious: You have a hunger for success and are eager to contribute to a fast-growing company with big goals.
What You’ll Be Doing
- Build and Innovate: Develop and scale AI-driven products like Gigi (AI Outbound SDR) and Jim (AI Inbound SDR), and automate across voice & video with AI.
- Collaborate Across Teams: Work closely with our Product, GTM, and Engineering teams to deliver world-class AI solutions that drive massive value for our customers.
- Integrate and Optimize: Create seamless integrations with popular platforms like Salesforce, LinkedIn, and HubSpot, enhancing our AI’s capabilities.
- Problem Solving: Tackle challenging problems head-on, from data pipelines to user experience, ensuring that every solution is both functional and delightful.
- Drive AI Adoption: Be a key player in transforming how businesses operate by automating workflows, lead generation, and more with AI.
Who are we looking for?
We are looking for a Senior Data Scientist who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models.
Qualification and experience
- B.Tech/Master's/Ph.D. in computer science, electrical engineering, mathematics, data science, or a related field.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, and ML Libraries like PyTorch, sklearn, pandas, SQL, and Git is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups, with the opportunity to avail of industry-driven mentorship programs, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations, you made it!
If this position sparked your interest, apply now to initiate the screening process.
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of at least one cloud platform, such as AWS, Azure, or Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for managing cloud resources as infrastructure as code (IaC).
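The ETL-pipeline skills this posting asks for can be illustrated with a minimal extract-transform-load sketch in Python using only the standard library. The CSV data, table, and field names below are invented for illustration; a production pipeline would read from real sources and load into a proper warehouse rather than SQLite.

```python
import csv
import io
import sqlite3

# Hypothetical raw sensor export; in practice this would come from a file or API.
RAW_CSV = """machine_id,temperature_c,recorded_at
M-01,71.5,2024-01-05T10:00:00
M-02,,2024-01-05T10:00:00
M-01,69.9,2024-01-05T10:05:00
"""

def extract(raw):
    """Read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Drop rows with missing readings and cast types (basic data cleaning)."""
    clean = []
    for row in rows:
        if not row["temperature_c"]:
            continue  # skip incomplete readings
        clean.append((row["machine_id"], float(row["temperature_c"]), row["recorded_at"]))
    return clean

def load(records, conn):
    """Load cleaned records into a warehouse table (SQLite stands in here)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (machine_id TEXT, temperature_c REAL, recorded_at TEXT)"
    )
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", records)
    return conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
```

The same extract/transform/load separation carries over directly to orchestrated pipelines in Azure Data Factory or Airflow, where each stage becomes a task.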
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. With renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., under EU Horizon 2020), TVARIT is one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of at least one cloud platform, such as AWS, Azure, or Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills, including the ability to extract actionable insights from raw data to help improve the business.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for managing cloud resources as infrastructure as code (IaC).
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits And Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
at DeepIntent
With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.
We are seeking a skilled and experienced Site Reliability Engineer (SRE) to join our dynamic team. The ideal candidate will have a minimum of 3 years of hands-on experience in managing and maintaining production systems, with a focus on reliability, scalability, and performance. As an SRE at Deepintent, you will play a crucial role in ensuring the stability and efficiency of our infrastructure, as well as contributing to the development of automation and monitoring tools.
Responsibilities:
- Deploy, configure, and maintain Kubernetes clusters for our microservices architecture.
- Utilize Git and Helm for version control and deployment management.
- Implement and manage monitoring solutions using Prometheus and Grafana.
- Work on continuous integration and continuous deployment (CI/CD) pipelines.
- Containerize applications using Docker and manage orchestration.
- Manage and optimize AWS services, including but not limited to EC2, S3, RDS, and AWS CDN.
- Maintain and optimize MySQL databases, Airflow, and Redis instances.
- Write automation scripts in Bash or Python for system administration tasks.
- Perform Linux administration tasks and troubleshoot system issues.
- Utilize Ansible and Terraform for configuration management and infrastructure as code.
- Demonstrate knowledge of networking and load-balancing principles.
- Collaborate with development teams to ensure applications meet reliability and performance standards.
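One responsibility above is writing automation scripts in Bash or Python for system administration tasks. A minimal sketch of that kind of script is below: it parses `df -P`-style output and flags filesystems above a usage threshold. The sample output is canned for illustration; a real script would capture it via `subprocess`.

```python
# Canned `df -P`-style output; a real script would run the command and read stdout.
DF_OUTPUT = """Filesystem 1024-blocks Used Available Capacity Mounted
/dev/sda1 102400 92160 10240 90% /
/dev/sdb1 204800 40960 163840 20% /data
"""

def overloaded_mounts(df_text, threshold=80):
    """Return mount points whose capacity exceeds `threshold` percent."""
    flagged = []
    for line in df_text.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        capacity = int(fields[4].rstrip("%"))  # e.g. "90%" -> 90
        if capacity > threshold:
            flagged.append(fields[5])
    return flagged
```

In practice such a check would feed an alerting path (e.g., a Prometheus exporter or a pager hook) rather than just returning a list.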
Additional Skills (Good to Know):
- Familiarity with ClickHouse and Druid for data storage and analytics.
- Experience with Jenkins for continuous integration.
- Basic understanding of Google Cloud Platform (GCP) and data center operations.
Qualifications:
- Minimum 3 years of experience in a Site Reliability Engineer role or similar.
- Proven experience with Kubernetes, Git, Helm, Prometheus, Grafana, CI/CD, Docker, and microservices architecture.
- Strong knowledge of AWS services, MySQL, Airflow, Redis, AWS CDN.
- Proficient in scripting languages such as Bash or Python.
- Hands-on experience with Linux administration.
- Familiarity with Ansible and Terraform for infrastructure management.
- Understanding of networking principles and load balancing.
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field.
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. With renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., under EU Horizon 2020), TVARIT is one of the most innovative AI companies in Germany and Europe.
Requirements:
- Python Experience: Minimum 3 years.
- Software Development Experience: Minimum 8 years.
- Data Engineering and ETL Workloads: Minimum 2 years.
- Familiarity with Software Development Life Cycle (SDLC).
- CI/CD Pipeline Development: Experience in developing CI/CD pipelines for large projects.
- Agile Framework & Sprint Methodology: Experience with Jira.
- Source Version Control: Experience with GitHub or a similar version control service.
- Team Leadership: Experience leading a team of software developers/data scientists.
Good to Have:
- Experience with Golang.
- DevOps/Cloud Experience (preferably AWS).
- Experience with React and TypeScript.
Responsibilities:
- Mentor and train a team of data scientists and software developers.
- Lead and guide the team in best practices for software development and data engineering.
- Develop and implement CI/CD pipelines.
- Ensure adherence to Agile methodologies and participate in sprint planning and execution.
- Collaborate with the team to ensure the successful delivery of projects.
- Provide on-site support and training in Pune.
Skills and Attributes:
- Strong leadership and mentorship abilities.
- Excellent problem-solving skills.
- Effective communication and teamwork.
- Ability to work in a fast-paced environment.
- Passionate about technology and continuous learning.
Note: This is a part-time position paid on an hourly basis. The initial commitment is 4-8 hours per week, with potential fluctuations.
Join TVARIT and be a pivotal part of shaping the future of software development and data engineering.
Greetings! Wissen Technology is hiring for the position of Data Engineer.
Please find the job description below for your reference:
- Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics.
- Implement data ingestion processes from various sources including APIs, databases, and flat files.
- Optimize and tune big data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Manage and monitor EMR clusters, ensuring high availability and reliability.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses.
- Implement data security best practices to ensure data is protected and compliant with relevant regulations.
- Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
- Troubleshoot and resolve issues related to data processing and EMR cluster performance.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, with a focus on big data technologies.
- Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with SQL and NoSQL databases.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving skills and the ability to work independently and collaboratively in a team environment
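The Spark-on-EMR workflows named above follow a map-then-aggregate pattern. Below is a pure-Python sketch of that pattern (the equivalent of Spark's `map` and `reduceByKey`), with invented event records; a real job would express the same logic over an RDD or DataFrame distributed across the cluster.

```python
from collections import defaultdict

# Invented clickstream-style records standing in for a distributed dataset.
events = [
    {"user": "a", "bytes": 120},
    {"user": "b", "bytes": 300},
    {"user": "a", "bytes": 80},
]

# "map" stage: emit (key, value) pairs per record
pairs = [(e["user"], e["bytes"]) for e in events]

# "reduceByKey" stage: aggregate values per key
totals = defaultdict(int)
for user, n in pairs:
    totals[user] += n
```

On EMR the shuffle between the two stages is what dominates cost, which is why partitioning and combiner tuning matter for the "optimize and tune big data workflows" responsibility above.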
Job Description:
Ideal experience required: 6-8 years
- Mandatory hands-on experience in .NET Core (version 8)
- Mandatory hands-on experience in Angular (version 10 or above)
- Azure and microservice architecture experience is good to have
- No database or domain constraints
Skills:
- 7 to 10 years of working experience managing .NET projects closely with internal and external clients in structured contexts in an international environment.
- Strong knowledge of .Net Core, .NET MVC, C#, SQL Server & JavaScript
- Working experience in Angular
- Familiar with various design and architectural patterns
- Should be familiar with Git source code management.
- Should be able to write clean, readable, and easily maintainable code.
- Understanding of fundamental design principles for building scalable applications.
- Experience implementing automated testing platforms and unit tests.
Nice to have:
- AWS
- Elastic Search
- MongoDB
Responsibilities:
- Should be able to handle modules/project independently with minor supervision.
- Should be good at troubleshooting and problem-solving.
- Should be able to take complete ownership of modules and projects.
- Should be able to communicate and coordinate with multiple teams.
- Must have good verbal & written communication skills.
at Monsoonfish
Preferred Skills:
- Experience with XML-based web services (SOAP, REST).
- Knowledge of database technologies (SQL, NoSQL) for XML data storage.
- Familiarity with version control systems (Git, SVN).
- Understanding of JSON and other data interchange formats.
- Certifications in XML technologies are a plus.
Sr. Data Engineer (Data Warehouse-Snowflake)
Experience: 5+yrs
Location: Pune (Hybrid)
As a Senior Data Engineer with Snowflake expertise, you are a curious, innovative subject matter expert who mentors young professionals. You are a key person in converting the vision and data strategy for data solutions and delivering on them. With your knowledge you will help create data-driven thinking within the organization, not just within data teams, but also in the wider stakeholder community.
Skills Preferred
- Advanced written, verbal, and analytic skills, and demonstrated ability to influence and facilitate sustained change. Ability to convey information clearly and concisely to all levels of staff and management about programs, services, best practices, strategies, and organizational mission and values.
- Proven ability to focus on priorities, strategies, and vision.
- Very good understanding of Data Foundation initiatives such as data modelling, data quality management, data governance, data maturity assessments, and data strategy in support of the key business stakeholders.
- Actively deliver the roll-out and embedding of Data Foundation initiatives in support of the key business programs advising on the technology and using leading market standard tools.
- Coordinate the change management process, incident management and problem management process.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Drive implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation, and maximize value delivery.
Knowledge Preferred
- Extensive knowledge of and hands-on experience with Snowflake and its different components: user/group management, data store/warehouse management, external stages/tables, working with semi-structured data, Snowpipe, etc.
- Implement and manage CI/CD for migrating and deploying Snowflake code to higher environments.
- Proven experience with Snowflake access control and authentication, data security, data sharing, the VS Code extension for Snowflake, replication and failover, and SQL optimization; the analytical ability to quickly troubleshoot and debug development and production issues is key to success in this role.
- Proven technology champion in working with relational and data warehouse databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Highly experienced in building and optimizing complex queries. Good with manipulating, processing, and extracting value from large, disconnected datasets.
- Your experience in handling big data sets and big data technologies will be an asset.
- Proven champion with in-depth knowledge of at least one of the scripting languages: Python, SQL, or PySpark.
Primary responsibilities
- You will be an asset to our team, bringing deep technical skills and capabilities to become a key part of projects defining the data journey in our company, keen to engage, network, and innovate in collaboration with company-wide teams.
- Collaborate with the data and analytics team to develop and maintain a data model and data governance infrastructure using a range of different storage technologies that enables optimal data storage and sharing using advanced methods.
- Support the development of processes and standards for data mining, data modeling and data protection.
- Design and implement continuous process improvements for automating manual processes and optimizing data delivery.
- Assess and report on the unique data needs of key stakeholders and troubleshoot any data-related technical issues through to resolution.
- Work to improve data models that support business intelligence tools, improve data accessibility and foster data-driven decision making.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Manage and lead technical design and development activities for implementation of large-scale data solutions in Snowflake to support multiple use cases (transformation, reporting and analytics, data monetization, etc.).
- Translate advanced business data, integration and analytics problems into technical approaches that yield actionable recommendations, across multiple, diverse domains; communicate results and educate others through design and build of insightful presentations.
- Exhibit strong knowledge of the Snowflake ecosystem and can clearly articulate the value proposition of cloud modernization/transformation to a wide range of stakeholders.
Relevant work experience
Bachelor's degree in a Science, Technology, Engineering, Mathematics, or Computer Science discipline (or equivalent), with 7+ years of experience in enterprise-wide data warehousing, governance, policies, procedures, and implementation.
Aptitude for working with data, interpreting results, business intelligence and analytic best practices.
Business understanding
Good knowledge and understanding of Consumer and industrial products sector and IoT.
Good functional understanding of solutions supporting business processes.
Must have
- Snowflake 5+ years
- Overall different Data warehousing techs 5+ years
- SQL 5+ years
- Data warehouse designing experience 3+ years
- Experience with cloud and on-prem hybrid models in data architecture
- Knowledge of Data Governance and strong understanding of data lineage and data quality
- Programming & Scripting: Python, PySpark
- Database technologies such as Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)
Nice to have
- Demonstrated experience in modern enterprise data integration platforms such as Informatica
- AWS cloud services: S3, Lambda, Glue and Kinesis and API Gateway, EC2, EMR, RDS, Redshift and Kinesis
- Good understanding of Data Architecture approaches
- Experience in designing and building streaming data ingestion, analysis, and processing pipelines using Kafka, Kafka Streams, Spark Streaming, StreamSets, and similar cloud-native technologies.
- Experience with implementation of operations concerns for a data platform such as monitoring, security, and scalability
- Experience working in DevOps, Agile, Scrum, Continuous Delivery and/or Rapid Application Development environments
- Building mock and proof-of-concepts across different capabilities/tool sets exposure
- Experience working with structured, semi-structured, and unstructured data, extracting information, and identifying linkages across disparate data sets
Primary Skills
DynamoDB, Java, Kafka, Spark, Amazon Redshift, AWS Lake Formation, AWS Glue, Python
Skills:
Good work experience showing growth as a Data Engineer.
Hands-on programming experience
Implementation experience with Kafka, Kinesis, Spark, AWS Glue, and AWS Lake Formation
Excellent knowledge of Python, Scala/Java, Spark, AWS (Lambda, Step Functions, DynamoDB, EMR), Terraform, UI (Angular), Git, and Maven
Experience with performance optimization in batch and real-time processing applications
Expertise in data governance and data security implementation
Good hands-on design and programming skills for building reusable tools and products. Experience developing in AWS or similar cloud platforms; preferred: ECS, EKS, S3, EMR, DynamoDB, Aurora, Redshift, QuickSight, or similar
Familiarity with systems with a very high volume of transactions, microservice design, or data processing pipelines (Spark)
Knowledge of and hands-on experience with serverless technologies such as Lambda, MSK, MWAA, and Kinesis Analytics is a plus
Expertise in practices like Agile, peer reviews, and continuous integration
Roles and responsibilities:
Determining project requirements and developing work schedules for the team.
Delegating tasks and achieving daily, weekly, and monthly goals.
Responsible for designing, building, testing, and deploying the software releases.
Salary: 25-40 LPA
Job Description:
- Proficient in Python.
- Good knowledge of stress/load testing and performance testing.
- Knowledge of Linux.
About Us
Sahaj Software is an artisanal software engineering firm built on the values of trust, respect, curiosity, and craftsmanship, delivering purpose-built solutions to drive data-led transformation for organisations. Our emphasis is on craft: we create purpose-built solutions, leveraging Data Engineering, Platform Engineering, and Data Science with a razor-sharp focus to solve complex business and technology challenges and provide customers with a competitive edge.
About The Role
As a Data Engineer, you’ll feel at home if you are hands-on, grounded, opinionated and passionate about delivering comprehensive data solutions that align with modern data architecture approaches. Your work will range from building a full data platform to building data pipelines or helping with data architecture and strategy. This role is ideal for those looking to have a large impact and huge scope for growth, while still being hands-on with technology. We aim to allow growth without becoming “post-technical”.
Responsibilities
- Collaborate with Data Scientists and Engineers to deliver production-quality AI and Machine Learning systems
- Build frameworks and supporting tooling for data ingestion from a complex variety of sources
- Consult with our clients on data strategy, modernising their data infrastructure, architecture and technology
- Model their data for increased visibility and performance
- You will be given ownership of your work, and are encouraged to propose alternatives and make a case for doing things differently; our clients trust us and we manage ourselves.
- You will work in short sprints to deliver working software
- You will work with other data engineers at Sahaj and help build Data Engineering capability across the organisation
You can read more about what we do and how we think here: https://sahaj.ai/client-stories/
Skills you’ll need
- Demonstrated experience as a Senior Data Engineer in complex enterprise environments
- Deep understanding of technology fundamentals and experience with languages like Python, or functional programming languages like Scala
- Demonstrated experience in the design and development of big data applications using tech stacks like Databricks, Apache Spark, HDFS, HBase and Snowflake
- Strong skills in building data products by integrating large sets of data from hundreds of internal and external sources.
- A nuanced understanding of code quality, maintainability and practices like Test Driven Development
- Ability to deliver an application end to end; having an opinion on how your code should be built, packaged and deployed using CI/CD
- Understanding of Cloud platforms, DevOps, GitOps, and Containers
What will you experience as a culture at Sahaj?
At Sahaj, our people collectively stand for a shared purpose where everyone owns the dreams, ideas, ideologies, successes, and failures of the organisation: a synergy rooted in the ethos of honesty, respect, trust, and equitability. At Sahaj, you will experience:
- Creativity
- Ownership
- Curiosity
- Craftsmanship
- A culture of trust, respect and transparency
- Opportunity to collaborate with some of the finest minds in the industry
- Work across multiple domains
What are the benefits of being at Sahaj?
- Unlimited leaves
- Life Insurance & Private Health insurance paid by Sahaj
- Stock options
- No hierarchy
- Open Salaries
We are looking for a QA engineer with experience in Python, AWS, and chaos engineering tools (Chaos Monkey, Gremlin).
- Strong understanding of distributed systems, cloud computing (AWS), and networking principles
- Ability to understand complex trading systems and prepare and execute plans to induce failures
- Python
- Experience with chaos engineering tooling such as Chaos Monkey, Gremlin, or similar
- Domain: Investment Banking or Electronic Trading experience is mandatory
- Develop automation tests in Python/pytest across all components (e.g., API testing, client-server testing, E2E testing) to meet product requirements and customer usages
- Hands-On experience in Python
- Proficiency in test automation frameworks and tools such as Selenium, Cucumber.
- Experience working in a Microsoft Windows and Linux environment
- Experience using Postman and automated API testing
- Experience designing & executing load/stress and performance testing
- Experience using test case and test execution management tools, issue management tools (e.g., Jira), and development environments (like Visual Studio, IntelliJ, or Eclipse).
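The API-testing work described above boils down to calling an endpoint and asserting on the response. A minimal sketch of such a check is below; the payload is canned here and the endpoint shape is invented, whereas a real pytest suite would fetch it over HTTP (e.g., with the requests library) before validating.

```python
import json

# Canned response standing in for the body of a hypothetical GET /orders call.
CANNED_RESPONSE = json.dumps({"status": "ok", "orders": [{"id": 1}, {"id": 2}]})

def check_orders_response(body):
    """Validate the response schema and return the order ids."""
    data = json.loads(body)
    assert data["status"] == "ok", "API reported a failure status"
    assert isinstance(data["orders"], list), "orders must be a list"
    return [order["id"] for order in data["orders"]]

order_ids = check_orders_response(CANNED_RESPONSE)
```

In a pytest suite the function body would live in a `test_*` function, with the HTTP call isolated in a fixture so the same schema checks run against any environment.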
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
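The "AWS Lambda for event-based tasks" responsibility above typically means reacting to S3 object-created notifications. Below is a minimal handler sketch; the bucket and key names are invented, and a real handler would go on to start a Glue/Step Functions job or load the object rather than just returning the keys.

```python
def lambda_handler(event, context=None):
    """Collect the S3 keys from ObjectCreated records in the triggering event."""
    keys = [
        record["s3"]["object"]["key"]
        for record in event.get("Records", [])
        if record.get("eventName", "").startswith("ObjectCreated")
    ]
    return {"statusCode": 200, "processed": keys}

# A trimmed-down S3 event payload for local testing:
sample_event = {
    "Records": [
        {"eventName": "ObjectCreated:Put",
         "s3": {"bucket": {"name": "raw-zone"}, "object": {"key": "2024/01/sales.csv"}}}
    ]
}
result = lambda_handler(sample_event)
```

Testing the handler locally against a sample payload like this, before wiring up the S3 trigger, keeps the event-parsing logic separate from the AWS plumbing.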
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the health of the overall solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration, and data wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python (Java preferable)
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data platform related services on at least one cloud platform, IAM, and data security
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and hands-on experience with database technologies (Oracle, MySQL, SQL Server, Postgres)
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD: infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
A modern work platform means a single source of truth for your desk and deskless employees alike, where everything they need is organized and easy to find.
MangoApps was designed to unify your employee experience by combining intranet, communication, collaboration, and training into one intuitive, mobile-accessible workspace.
We are looking for a highly capable machine learning engineer to optimize our machine learning systems. You will be evaluating existing machine learning (ML) processes, performing statistical analysis to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a related ML role. The ideal machine learning engineer is someone whose expertise translates into the enhanced performance of our predictive automation software.
AI/ML Engineer Responsibilities:
- Designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transforming data science prototypes and applying appropriate ML algorithms and tools.
- Ensuring that algorithms generate accurate user recommendations.
- Turning unstructured data into useful information by auto-tagging images and text-to-speech conversions.
- Solving complex problems with multi-layered data sets, as well as optimizing existing machine learning libraries and frameworks.
- Applying ML algorithms to huge volumes of historical data to make predictions.
- Running tests, performing statistical analysis, and interpreting test results.
- Documenting machine learning processes.
- Keeping abreast of developments in machine learning.
AI/ML Engineer Requirements:
- Bachelor's degree in computer science, data science, mathematics, or a related field, with at least 3 years of experience as an AI/ML Engineer
- Advanced proficiency with Python and the FastAPI framework, along with good exposure to libraries like scikit-learn, Pandas, NumPy, etc.
- Experience working with ChatGPT, LangChain (must), Large Language Models (good to have), and Knowledge Graphs
- Extensive knowledge of ML frameworks, libraries, data structures, data modelling, and software architecture.
- In-depth knowledge of mathematics, statistics, and algorithms.
- Superb analytical and problem-solving abilities.
- Great communication and collaboration skills.
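To make the modelling expectation concrete, here is a minimal, dependency-free sketch of the kind of model fitting this role involves — a linear model trained by gradient descent. In a real project you would use scikit-learn (e.g. LinearRegression) rather than hand-rolling this; the function and data below are purely illustrative.

```python
# Minimal gradient-descent linear regression (illustrative only).
# Real projects would use scikit-learn; this stdlib sketch shows the idea.

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by minimising mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy "historical data" following y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_linear(xs, ys)
```

After training, `w` and `b` converge close to the true slope 2 and intercept 1 — the same fit-then-predict loop that libraries like scikit-learn wrap behind `fit()` and `predict()`.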
Why work with us
- We take delight in what we do, and it shows in the products we offer and in the ratings of our products by leading industry analysts like IDC, Forrester, and Gartner, as well as independent sites like Capterra.
- Be part of the team that has a great product-market fit, solving some of the most relevant communication and collaboration challenges faced by big and small organizations across the globe.
- MangoApps is a highly collaborative place, and careers at MangoApps come with a lot of growth and learning opportunities. If you’re looking to make an impact, MangoApps is the place for you.
- We focus on getting things done and know how to have fun while we do them. We have a team that brings creativity, energy, and excellence to every engagement.
- A workplace that was listed as one of the top 51 Dream Companies to work for by World HRD Congress in 2019.
- As a group, we are flat and treat everyone the same.
Benefits
We are a young organization and growing fast. Along with a fantastic workplace culture that helps you meet your career aspirations, we provide some comprehensive benefits.
1. Comprehensive Health Insurance for Family (Including Parents) with no riders attached.
2. Accident Insurance for each employee.
3. Sponsored Trainings, Courses and Nano Degrees.
About You
· Self-motivated: You can work with a minimum of supervision and be capable of strategically prioritizing multiple tasks in a proactive manner.
· Driven: You are a driven team player, collaborator, and relationship builder whose infectious can-do attitude inspires others and encourages great performance in a fast-moving environment.
· Entrepreneurial: You thrive in a fast-paced, changing environment and you’re excited by the chance to play a large role.
· Passionate: You must be passionate about online collaboration and ensuring our clients are successful; we love seeing hunger and ambition.
· Thrive in a start-up mentality with a “whatever it takes” attitude.
Publicis Sapient Overview:
As Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will create technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration, and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
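The batch versus real-time ingestion responsibility above can be sketched in plain Python. Production pipelines would use Spark, Flink, or Kafka; this stdlib-only sketch (with invented record shapes) only contrasts the shape of the two approaches — aggregate everything at once versus consume records one at a time.

```python
# Illustrative contrast between batch and incremental (streaming-style)
# aggregation. Real pipelines use Spark / Flink / Kafka; this is a sketch.
from collections import defaultdict

def batch_totals(records):
    """Batch: all records are available up front; aggregate in one pass."""
    totals = defaultdict(float)
    for source, amount in records:
        totals[source] += amount
    return dict(totals)

def streaming_totals(record_stream):
    """Real-time: consume records one at a time, yielding running totals."""
    totals = defaultdict(float)
    for source, amount in record_stream:
        totals[source] += amount
        yield source, totals[source]

records = [("api", 10.0), ("file", 5.0), ("api", 2.5)]
assert batch_totals(records) == {"api": 12.5, "file": 5.0}
# The streaming version emits an updated total after every record.
assert list(streaming_totals(iter(records)))[-1] == ("api", 12.5)
```

The generator-based streaming version is the same incremental-state pattern that frameworks like Spark Streaming and Flink manage at scale, with fault tolerance and partitioning added.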
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1.Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies
2.Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4.Performance tuning and optimization of data pipelines
5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security
7.Cloud data specialty and other related Big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Dear Connections,
We are hiring! Join our dynamic team as a QA Automation Tester (Python, Java, Selenium, API, SQL, Git)! We're seeking a passionate professional to contribute to our innovative projects. If you thrive in a collaborative environment, possess expertise in Python, Java, Selenium, and Robot Framework, and are ready to make an impact, apply now! Wissen Technology is committed to fostering innovation, growth, and collaboration. Don't miss this chance to be part of something extraordinary.
Company Overview:
Wissen is the preferred technology partner for executing transformational projects and accelerating implementation through thought leadership and a solution mindset. It is a leading IT solutions and consultancy firm dedicated to providing innovative and customized solutions to global clients. We leverage cutting-edge technologies to empower businesses and drive digital transformation.
#jobopportunity #hiringnow #joinourteam #career #wissen #QA #automationtester #robot #apiautomation #sql #java #python #selenium
About Company:
Our client is the industry-leading provider of CRM messaging solutions. As a forward-thinking global company, it continues to innovate and develop cutting-edge solutions that redefine how businesses digitally communicate with their customers. It works with 2,500 customers across 190 countries, ranging from SMBs to large global enterprises.
About the role:
The Director of Product Management is responsible for overseeing and implementing product development policies, objectives, and initiatives as well as leading research for new products, product enhancements, and product design.
Roles & responsibilities:
- Become a product expert on all of the company's solutions
- Build and own the product roadmap and timeline.
- Develop and execute a go-to-market strategy that addresses product, pricing, messaging, competitive positioning, product launch and promotion.
- Work with Development leaders to oversee development resources, including managing ROI, timelines, and deliverables.
- Work with the leadership team on driving product strategy, in both new and existing products, to increase overall market share, revenue and customer loyalty.
- Implement and communicate the strategic and technical direction for the department.
- Engage directly with customers to understand market needs and product requirements.
- Develop/implement a suite of Key Performance Indicators (KPIs) to measure product performance including profitability, customer satisfaction metrics, compliance, and delivery efficiency.
- Define and measure value of software solutions to establish and quantify customer ROI.
- Represent the company by visiting customers to solicit feedback on company products and services.
- Monitor and report progress of projects within agreed-upon timeframes.
- Write very high-quality BRDs, PRDs, Epics, and User Stories
- Create functional strategies and specific objectives, as well as develop budgets, policies, and procedures.
- Create and analyze financial proposals related to product development and provide supporting content showing allocation of funds to execute these plans.
- Write status updates, iteration delivery notes, and release notes as necessary
- Display a high level of critical thinking in cross-functional process analysis and problem resolution for new and existing products.
- Develop & conduct specialized training on new products launched and raise awareness & application of relevant subject matter.
- Monitor internal processes for efficiency and validity pre & post product launch/changes.
Requirements:
- Excellent communication skills, both verbal and in writing.
- Strong customer focus paired with exceptional presentation skills.
- Skilled at data analytics focused on identifying opportunities, driving insights, and measuring value.
- Strong problem-solving skills.
- Ability to work effectively in a diverse team environment.
- Proven strategic and tactical leadership, motivation, and decision-making skills
Required Education & Experience:
- Bachelor's Degree in Technology related field.
- Experience in working with a geographically diverse development team.
- Strong technical background with the ability to understand and discuss technical concepts.
- Proven experience in Software Development and Product Management.
- 12+ years of experience leading product teams in a fast-paced business environment as Product Leader on Software Platform or SaaS solution.
- Proven ability to lead and influence cross-functional teams.
- Demonstrated success in delivering high-impact products.
Preferred Qualifications
- Transition from a software development role to product management.
- Experience building messaging solutions or marketing or support solutions.
- Experience with agile development methodologies.
- Familiarity with design thinking principles.
- Knowledge of relevant technologies and industry trends.
- Strong project management skills.
Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune /Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have knowledge of and experience creating forms with Django; Crispy Forms is a plus.
- Must have leadership experience
- Should have a good understanding of function-based and class-based views.
- Should have a good understanding of authentication (JWT and token authentication)
- Django – at least one senior with deep Django experience; the other one or two can be mid-to-senior Python or Django developers
- FrontEnd – must have React/Angular and CSS experience
- Database – ideally SQL; the most senior hire should have solid DB experience
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
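Since the description above asks for a good understanding of JWT and token authentication, here is a stdlib-only sketch of the core mechanism such tokens rely on: an HMAC signature over a payload, verified before the payload is trusted. In a real Django project you would use a maintained library (e.g. djangorestframework-simplejwt) rather than rolling your own; the secret and payload here are purely illustrative.

```python
# Stdlib sketch of the signing/verification idea behind auth tokens.
# Real Django apps should use a library such as djangorestframework-simplejwt.
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # illustrative; never hard-code secrets in practice

def issue_token(payload: dict) -> str:
    """Encode the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str):
    """Return the payload if the signature checks out, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token tampered with or forged
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"user_id": 42})
assert verify_token(token) == {"user_id": 42}
assert verify_token("x" + token) is None  # any tampering invalidates it
```

JWTs add a standardised header, expiry claims, and algorithm negotiation on top of this same sign-then-verify pattern.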
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server-side rendered HTML/jQuery is older tech, but it is what we are OK with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push to React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup to AWS and GCP.
- Load balancing, more advanced services, task queues, etc.
We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.
What will you work on?
- Interface with clients
- Recommend tech stacks
- Define end-to-end logical and cloud-native architectures
- Define APIs
- Integrate with 3rd party systems
- Create architectural solution prototypes
- Hands-on coding, team lead, code reviews, and problem-solving
What Makes You A Great Fit?
- 5+ years of software experience
- Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
- Solid expertise and hands-on experience in Python with Flask or Django
- Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
- Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
- Knowledge of DevOps practices
- Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
- Excellent communication skills, verbal and written
The job is a full-time position at our Pune (Viman Nagar) office (https://goo.gl/maps/o67FWr1aedo).
(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)
Hiring alert 🚨
Calling all #PythonDevelopers looking for an #ExcitingJobOpportunity 🚀 with one of our #Insurtech clients.
Are you a Junior Python Developer eager to grow your skills in #BackEnd development?
Our company is looking for someone like you to join our dynamic team. If you're passionate about Python and ready to learn from seasoned developers, this role is for you!
📣 About the company
The client is a fast-growing consultancy firm, helping P&C Insurance companies on their digital journey. With offices in Mumbai and New York, they're at the forefront of insurance tech. Plus, they offer a hybrid work culture with flexible timings, typically between 9 to 5, to accommodate your work-life balance.
💡 What you’ll do
📌 Work with other developers.
📌 Implement Python code with assistance from senior developers.
📌 Write effective test cases, such as unit tests, to ensure the code meets the software design requirements.
📌 Ensure Python code executes efficiently and is well written.
📌 Refactor old Python code to ensure it follows modern principles.
📌 Liaise with stakeholders to understand the requirements.
📌 Ensure integration can take place with front end systems.
📌 Identify and fix code where bugs have been identified.
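To make the unit-testing expectation above concrete, here is a minimal example using the stdlib unittest module. The `premium_for_age` function is hypothetical, invented for illustration only (a toy nod to the insurtech domain).

```python
# Minimal unit-test example of the kind the role calls for, using the
# stdlib unittest module. premium_for_age is a hypothetical function.
import unittest

def premium_for_age(age: int) -> float:
    """Toy insurance premium rule: base 100, with a surcharge under 25."""
    if age < 0:
        raise ValueError("age must be non-negative")
    return 150.0 if age < 25 else 100.0

class TestPremiumForAge(unittest.TestCase):
    def test_young_driver_surcharge(self):
        self.assertEqual(premium_for_age(21), 150.0)

    def test_standard_premium(self):
        self.assertEqual(premium_for_age(40), 100.0)

    def test_negative_age_rejected(self):
        with self.assertRaises(ValueError):
            premium_for_age(-1)

# Run with: python -m unittest <module>
```

Good unit tests like these cover the happy path, the boundary behaviour, and the error case — exactly the habit senior developers will look for.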
🔎 What you’ll need
📌 Minimum 3 years of experience writing AWS Lambda using Python
📌 Knowledge of other AWS services like CloudWatch and API Gateway
📌 Fundamental understanding of Python and its frameworks.
📌 Ability to write simple SQL queries
📌 Familiarity with AWS Lambda deployment
📌 The ability to problem-solve.
📌 Fast learner with an ability to adapt techniques based on requirements.
📌 Knowledge of how to effectively test Python code.
📌 Great communication and collaboration skills.
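Since the requirements above centre on writing AWS Lambda functions in Python behind API Gateway, here is a minimal handler sketch. The event shape follows the API Gateway proxy integration convention; the business logic is invented for illustration.

```python
# Minimal AWS Lambda handler sketch for the API Gateway proxy integration.
# The greeting logic is illustrative; in AWS this function would be
# configured as the Lambda's handler.
import json

def lambda_handler(event, context):
    """Return a greeting for the `name` query-string parameter."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event (context is unused here).
response = lambda_handler({"queryStringParameters": {"name": "Pune"}}, None)
assert response["statusCode"] == 200
assert json.loads(response["body"]) == {"message": "hello, Pune"}
```

Invoking the handler locally with a hand-built event, as above, is also the basis of the unit tests the role asks for.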
Full Stack Developer Job Description
Position: Full Stack Developer
Department: Technology/Engineering
Location: Pune
Type: Full Time
Job Overview:
As a Full Stack Developer at Invvy Consultancy & IT Solutions, you will be responsible for both front-end and back-end development, playing a crucial role in designing and implementing user-centric web applications. You will collaborate with cross-functional teams including designers, product managers, and other developers to create seamless, intuitive, and high-performance digital solutions.
Responsibilities:
Front-End Development:
Develop visually appealing and user-friendly front-end interfaces using modern web technologies such as C#, HTML5, CSS3, and JavaScript frameworks (e.g., React, Angular, Vue.js).
Collaborate with UX/UI designers to ensure the best user experience and responsive design across various devices and platforms.
Implement interactive features, animations, and dynamic content to enhance user engagement.
Optimize application performance for speed and scalability.
Back-End Development:
Design, develop, and maintain the back-end architecture using server-side technologies (e.g., Node.js, Python, Ruby on Rails, Java, .NET).
Create and manage databases, including data modeling, querying, and optimization.
Implement APIs and web services to facilitate seamless communication between front-end and back-end systems.
Ensure security and data protection by implementing proper authentication, authorization, and encryption measures.
Collaborate with DevOps teams to deploy and manage applications in cloud environments (e.g., AWS, Azure, Google Cloud).
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Proven experience as a Full Stack Developer or similar role.
Proficiency in front-end development technologies like HTML5, CSS3, JavaScript, and popular frameworks (React, Angular, Vue.js, etc.).
Strong experience with back-end programming languages and frameworks (Node.js, Python, Ruby on Rails, Java, .NET, etc.).
Familiarity with database systems (SQL and NoSQL) and their integration with web applications.
Knowledge of web security best practices and application performance optimization.
at DeepIntent
Who We Are:
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams
- Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
- Optimize queries and data access efficiencies, serve as expert in how to most efficiently attain desired data points
- Build “mastered” versions of the data for Analytics specific querying use cases
- Help with data ETL, table performance optimization
- Establish formal data practice for the Analytics practice in conjunction with rest of DeepIntent
- Build & operate scalable and robust data architectures
- Interpret analytics methodology requirements and apply to data architecture to create standardized queries and operations for use by analytics teams
- Implement DataOps practices
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics specific objectives
- Collaborate with various business stakeholders, software engineers, machine learning engineers, analysts
- Operate between Engineers and Analysts to unify both practices for analytics insight creation
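The query-optimization responsibility above can be illustrated with stdlib sqlite3 standing in for the production warehouse. The table and column names are invented; the point is the before/after query plan once an index exists.

```python
# Sketch of query optimization using stdlib sqlite3 as a stand-in for
# the production warehouse. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (campaign_id INTEGER, cost REAL)")
conn.executemany(
    "INSERT INTO impressions VALUES (?, ?)",
    [(i % 100, 0.01 * i) for i in range(10_000)],
)

# Without an index, filtering on campaign_id scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(cost) FROM impressions WHERE campaign_id = 7"
).fetchone()
assert "SCAN" in plan[3]

# Adding an index lets the same query do an indexed lookup instead.
conn.execute("CREATE INDEX idx_campaign ON impressions (campaign_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(cost) FROM impressions WHERE campaign_id = 7"
).fetchone()
assert "INDEX" in plan[3]
```

Reading query plans before and after an index change is the same habit that carries over to BigQuery, ClickHouse, or any MPP warehouse, just with different plan syntax.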
Who You Are:
- Adept in market research methodologies and using data to deliver representative insights
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases
- Deep SQL experience is a must
- Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical needs
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data
- Experience working with public clouds like GCP/AWS
- Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies
- Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
- Proficient with SQL,Python or JVM based language, Bash
- Experience with any of the Apache open source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious
- Comfortable to work in EST Time Zone
at DeepIntent
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a talented candidate with several years of experience in software Quality Assurance to join our QA team. This position will be at an individual contributor level as part of a collaborative, fast-paced team. As a member of the QA team, you will work closely with Product Managers and Developers to understand application features and create robust comprehensive test plans, write test cases, and work closely with the developers to make the applications more testable. We are looking for a well-rounded candidate with solid analytical skills, an enthusiasm for taking ownership of features, a strong commitment to quality, and the ability to work closely and communicate effectively with development and other teams. Experience with the following is preferred:
- Python
- Perl
- Shell Scripting
- Selenium
- Test Automation (QA)
- Software Testing (QA)
- Software Development (MUST HAVE)
- SDET (MUST HAVE)
- MySQL
- CI/CD
Who You Are:
- Hands on Experience with QA Automation Framework development & Design (Preferred language Python)
- Strong understanding of testing methodologies
- Scripting
- Strong problem analysis and troubleshooting skills
- Experience in databases, preferably MySQL
- Debugging skills
- REST/API testing experience is a plus
- Integrate end-to-end tests with CI/CD pipelines and monitor and improve metrics around test coverage
- Ability to work in a dynamic and agile development environment and be adaptable to changing requirements
- Performance testing experience with relevant automation and monitoring tools
- Exposure to Dockerization or Virtualization is a plus
- Experience working in the Linux/Unix environment
- Basic understanding of OS
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
The role is with a Fintech Credit Card company based in Pune within the Decision Science team. (OneCard )
About
Credit cards haven't changed much for over half a century so our team of seasoned bankers, technologists, and designers set out to redefine the credit card for you - the consumer. The result is OneCard - a credit card reimagined for the mobile generation. OneCard is India's best metal credit card built with full-stack tech. It is backed by the principles of simplicity, transparency, and giving back control to the user.
The Engineering Challenge
“Re-imaging credit and payments from First Principles”
Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.
Purpose of Role :
- Develop and implement the collection analytics and strategy function for the credit cards. Use analysis and customer insights to develop optimum strategy.
CANDIDATE PROFILE :
- Successful candidates will have in-depth knowledge of statistical modelling/data analysis tools (Python, R, etc.) and techniques. They will be adept communicators with good interpersonal skills, able to work with senior stakeholders in India to grow revenue, primarily through identifying/delivering/creating new, profitable analytics solutions.
We are looking for someone who:
- Has a proven track record in collection and risk analytics, preferably in the Indian BFSI industry. This is a must.
- Can identify and deliver appropriate analytics solutions
- Is experienced in analytics team management
Essential Duties and Responsibilities :
- Responsible for delivering high quality analytical and value added services
- Responsible for automating insights and proactive actions on them to mitigate collection Risk.
- Work closely with the internal team members to deliver the solution
- Engage Business/Technical Consultants and delivery teams appropriately so that there is a shared understanding and agreement as to deliver proposed solution
- Use analysis and customer insights to develop value propositions for customers
- Maintain and enhance the suite of suitable analytics products.
- Actively seek to share knowledge within the team
- Share findings with peers from other teams and management where required
- Actively contribute to setting best practice processes.
Knowledge, Experience and Qualifications :
Knowledge :
- Good understanding of collection analytics preferably in Retail lending industry.
- Knowledge of statistical modelling/data analysis tools (Python, R etc.), techniques and market trends
- Knowledge of different modelling frameworks like Linear Regression, Logistic Regression, Multiple Regression, LOGIT, PROBIT, time-series modelling, CHAID, CART, etc.
- Knowledge of Machine learning & AI algorithms such as Gradient Boost, KNN, etc.
- Understanding of decisioning and portfolio management in banking and financial services would be added advantage
- Understanding of credit bureau would be an added advantage
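The logistic-regression knowledge listed above is the workhorse of collection-risk scoring, and the scoring step fits in a few lines of stdlib Python. The coefficients below are hypothetical; in practice they would be fitted in Python or R on historical repayment data.

```python
# Stdlib sketch of logistic-regression scoring for collection risk.
# All coefficients are hypothetical, for illustration only.
import math

# Hypothetical fitted model: log-odds of default from two features.
INTERCEPT = -2.0
COEF_DAYS_PAST_DUE = 0.05   # per day past due
COEF_UTILISATION = 1.5      # credit-limit utilisation in [0, 1]

def default_probability(days_past_due: float, utilisation: float) -> float:
    """Apply the logistic (sigmoid) function to the linear risk score."""
    z = (INTERCEPT
         + COEF_DAYS_PAST_DUE * days_past_due
         + COEF_UTILISATION * utilisation)
    return 1.0 / (1.0 + math.exp(-z))

# A current, low-utilisation account scores far lower risk than a
# long-overdue, maxed-out one.
low = default_probability(0, 0.1)
high = default_probability(60, 0.95)
assert low < 0.2 < 0.8 < high
```

Segmenting accounts by such scores is what drives the collection strategies (who to contact, when, and how) that this role owns.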
Experience :
- 4 to 8 years of work experience in core analytics function of a large bank / consulting firm.
- Experience working on collection analytics is a must
- Experience on handling large data volumes using data analysis tools and generating good data insights
- Demonstrated ability to communicate ideas and analysis results effectively both verbally and in writing to technical and non-technical audiences
- Excellent communication, presentation and writing skills Strong interpersonal skills
- Motivated to meet and exceed stretch targets
- Ability to make the right judgments in the face of complexity and uncertainty
- Excellent relationship and networking skills across our different business and geographies
Qualifications :
- Master's degree in Statistics, Mathematics, Economics, Business Management, or Engineering from a reputed college
About UpSolve
We build and deliver complex AI solutions that help drive business decisions faster and more accurately. We are an AI company with a range of solutions developed on video, image, and text.
What you will do
- Stay informed on new technologies and implement cautiously
- Maintain necessary documentation for the project
- Fix the issues reported by application users
- Plan, build, and design solutions with a mental note of future requirements
- Coordinate with the development team to manage fixes, code changes, and merging
Location: Mumbai
Working Mode: Remote
What are we looking for
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Minimum 2 years of professional experience in software development, with a focus on machine learning and full stack development.
- Strong proficiency in Python programming language and its machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
- Experience in developing and deploying machine learning models in production environments.
- Proficiency in web development technologies including HTML, CSS, JavaScript, and front-end frameworks such as React, Angular, or Vue.js.
- Experience in designing and developing RESTful APIs and backend services using frameworks like Flask or Django.
- Knowledge of databases and SQL for data storage and retrieval.
- Familiarity with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Good to have Cloud Exposure
ROLE DESCRIPTION
The ideal candidate will be passionate about building resilient, scalable, and high-performance distributed systems products. This individual will thrive and succeed in delivering high-quality technology products in a fast-paced and rapid growth environment where priorities could shift quickly. We are looking for an engineer who prioritizes well, communicates clearly, and understands how to drive a high level of focus and excellence within a strong team. This person has an innate drive to build a culture centered on customer focus, efficient execution, high quality, rigorous testing, deep monitoring, and solid software engineering practices.
WHO WILL LOVE THIS JOB?
• Attracted to creativity, innovation, and eagerness to learn.
• Alignment to a fast-paced organization and its short-term and long-term goals.
• An engaging, open, genuine personality that naturally encourages interaction with individuals at all levels.
• Strong value system and sense of ethics.
• Absolute dedication to premium quality.
• Want to build a strong core product team capable of developing solutions for complex, industry-first problems
• Build a balance of experience, knowledge and new learnings
ROLE & RESPONSIBILITIES
• Driving the success of the software engineering team at Datamotive.
• Driving go/no-go decisions for product releases to customers.
• Drive QE team for developing test scenarios & product automation.
• Collaborating with senior and peer engineers to identify and improve upon feature improvements.
• Build a strong customer-focused mindset for qualifying product features and use cases.
• Develop, build, and perform functional, scale, and performance testing.
• Assist in identifying, researching, and designing newer features and cloud platform support in areas such as disaster recovery, data protection, and workload migration.
• Conduct pilot tests to assess the functionality of newly developed programs.
• Interface with customers for product introduction, knowledge transfer, solutions, bug triaging, etc.
• Assist customers by giving product demos, conducting POCs, training, etc.
• Manage Datamotive infrastructure, bringing innovative automation for optimizing infrastructure usage through monitoring and scripting.
• Design test environments to simulate customer behaviors and use cases in VMware vSphere, AWS, GCP, and Azure clouds.
• Help write technical documentation, and generate marketing content like blogs, webinars, seminars etc.
TECHNICAL SKILLS
• 8 - 12 years of experience in software testing with a relevant domain understanding of Data Protection, Disaster Recovery, and Ransomware Recovery.
• A strong understanding and demonstrable experience with at least one of the major public cloud platforms (GCP, AWS, Azure or VMware)
• A strong understanding and experience in qualifying complex, distributed systems at feature, scale and performance.
• Insights into the development of client-server SaaS applications with good breadth across networking, storage, micro-services, and other web technologies.
• Programming knowledge in either Python, Shell scripts or Powershell.
• Strong knowledge of test automation frameworks, e.g. Selenium, Cucumber, and Robot Framework
• Should be a computer science graduate with strong fundamentals & problem-solving abilities.
• Good understanding of virtualization, storage and cloud platforms like VMware, AWS, GCP, Azure and/or Kubernetes will be preferable.
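Much of the scale and performance qualification described above starts with simple latency sampling. A hedged, stdlib-only sketch (the function and thresholds are illustrative; real harnesses like Locust or pytest-benchmark add load generation and reporting):

```python
import statistics
import time

def measure_latency(fn, runs: int = 100) -> dict:
    """Sample per-call latencies (seconds) and summarize them."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p95": samples[max(0, int(0.95 * runs) - 1)],  # crude percentile
        "max": samples[-1],
    }
```

A QE pipeline might fail the build when p95 regresses beyond an agreed budget.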
WHAT’S IN IT FOR YOU?
• Impact. Backed by our TEAM, Investors and Advisors, Datamotive is on the path to rapid growth. As we take our products to the market, your position will be vital as you play a crucial role in innovating and developing our products, identifying new features, and filing patents, while also gaining personal experience and responsibilities. As a key player in our company's success, the impact of your work will be felt as we grow as an organization.
• Career Growth. At Datamotive, we highly value the input of each employee in helping us achieve our company goals. To this end, we strive to ensure that everyone has the access and exposure needed to stay up to date in the industry and to learn and improve their expertise. We ensure that each employee is given exposure to the functional and technical elements of our products as well as all related business functions. As your knowledge grows, so do the opportunities for advancement into more senior roles or other areas of our business. We strive to be a company where you can truly chart out a career path for yourself.
WHO WILL LOVE THIS JOB?
• Attracted to creativity, innovation, and eagerness to learn
• Alignment to a fast-paced organization and its short-term and long-term goals
• An engaging, open, genuine personality that naturally encourages interaction with individuals at all levels
• Strong value system and sense of ethics
• Absolute dedication to premium quality
• Want to build a strong core product team capable of developing solutions for complex, industry-first problems.
• Build a balance of experience, knowledge, and new learnings.
ROLES AND RESPONSIBILITIES?
• Driving the success of the software engineering team at Datamotive.
• Collaborating with senior and peer engineers to prioritize and deliver features on the roadmap.
• Build a strong development team focused on building optimized and usable solutions.
• Research, design, and develop distributed solutions to handle workload mobility across multi- and hybrid-cloud environments.
• Assist in Identifying, Researching & Designing newer features and cloud platform support in areas of disaster recovery, data protection, workload migration etc.
• Assist in building product roadmap.
• Conduct pilot tests to assess the functionality of newly developed programs.
• Interface with customers for product introduction, knowledge transfer, solutioning, bug triaging, etc.
• Assist customers by giving product demos, conducting POCs, training, etc.
• Manage Datamotive infrastructure, bring innovative automation for optimizing infrastructure usage through monitoring and scripting.
• Design test environments to simulate customer behaviours and use cases in VMware vSphere, AWS, GCP, Azure clouds.
• Help write technical documentation, generate marketing content like blogs, webinars, seminars etc.
TECHNICAL SKILLS
• 3 – 8 years of experience in software development with relevant domain understanding of Data Protection, Disaster Recovery, Ransomware Recovery.
• A strong understanding and demonstrable experience with at least one of the major public cloud platforms (GCP, AWS, Azure or VMware)
• A strong understanding and experience of designing and developing architecture of complex, distributed systems.
• Insights into development of client-server SaaS applications with good breadth across networking, storage, micro-services, and other web technologies.
• Experience of building and leading strong development teams with systems product development background
• Programming knowledge in Go, C, C++, Python, or shell scripting.
• Should be a computer science graduate with strong fundamentals & problem-solving abilities.
• Good understanding of virtualization, storage and cloud platforms like VMware, AWS, GCP, Azure and/or Kubernetes will be preferable
About Us
Mindtickle provides a comprehensive, data-driven solution for sales readiness and enablement that fuels revenue growth and brand value for dozens of Fortune 500 and Global 2000 companies and hundreds of the world’s most recognized companies across technology, life sciences, financial services, manufacturing, and service sectors.
With purpose-built applications, proven methodologies, and best practices designed to drive effective sales onboarding and ongoing readiness, Mindtickle enables company leaders and sellers to continually assess, diagnose, and develop the knowledge, skills, and behaviors required to engage customers and drive growth effectively. We are funded by great investors like SoftBank, Canaan Partners, NEA, Accel Partners, and others.
Job Brief
We are looking for a rockstar researcher at the Center of Excellence for Machine Learning. You are responsible for thinking outside the box, crafting new algorithms, developing end-to-end artificial intelligence-based solutions, and rightly selecting the most appropriate architecture for the system(s), such that it suits the business needs, and achieves the desired results under given constraints.
Credibility:
- You must have a proven track record in research and development with adequate publication/patenting and/or academic credentials in data science.
- You have the ability to directly connect business problems to research problems along with the latest emerging technologies.
Strategic Responsibility:
- To understand problem statements, connect the dots between high-level business statements and deep technology algorithms, and craft new systems and methods in the space of structured data mining, natural language processing, computer vision, speech technologies, robotics, or the Internet of Things.
- To be responsible for end-to-end production level coding with data science and machine learning algorithms, unit and integration testing, deployment, optimization and fine-tuning of models on cloud, desktop, mobile or edge etc.
- To learn in a continuous mode, upgrade and upskill along with publishing novel articles in journals and conference proceedings and/or filing patents, and be involved in evangelism activities and ecosystem development etc.
- To share knowledge, mentor colleagues, partners, and customers, take sessions on artificial intelligence topics both online or in-person, participate in workshops, conferences, seminars/webinars as a speaker, instructor, demonstrator or jury member etc.
- To design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance.
- To collaborate within the product streams and team to bring best practices and leverage world-class tech stack.
- To set up all essentials (tracking/alerting) to make sure the infrastructure and software built are working as expected.
- To search, collect, and clean data for analysis, and to set up efficient storage and retrieval pipelines.
Personality:
- Requires excellent communication skills – written, verbal, and presentation.
- You should be a team player.
- You should be positive towards problem-solving and have a very structured thought process to solve problems.
- You should be agile enough to learn new technology if needed.
Qualifications:
- B Tech / BS / BE / M Tech / MS / ME in CS or equivalent from Tier I / II or Top Tier Engineering Colleges and Universities.
- 6+ years of strong software (application or infrastructure) development experience and software engineering skills (Python, R, C, C++ / Java / Scala / Golang).
- Deep expertise and practical knowledge of operating systems, MySQL, and NoSQL databases (Redis, Couchbase, MongoDB, Elasticsearch, or any graph DB).
- Good understanding of Machine Learning Algorithms, Linear Algebra and Statistics.
- Working knowledge of Amazon Web Services (AWS).
- Experience with Docker and Kubernetes will be a plus.
- Experience with Natural Language Processing, Recommendation Systems, or Search Engines.
Our Culture
As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks, great learning opportunities & growth.
Our culture reflects the globally diverse backgrounds of our employees along with our commitment to our customers, each other, and a passion for excellence.
To know more about us, feel free to go through these videos:
1. Sales Readiness Explained: https://www.youtube.com/watch?v=XyMJj9AlNww&t=6s
2. What We Do: https://www.youtube.com/watch?v=jv3Q2XgnkBY
3. Ready to Close More Deals, Faster: https://www.youtube.com/watch?v=nB0exreVU-s
To view more videos, please access the below-mentioned link:
https://www.youtube.com/c/mindtickle/videos
Mindtickle is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification document form upon hire.
- Seeking an Individual carrying around 5+ yrs of experience.
- Must have skills - Jenkins, Groovy, Ansible, Shell Scripting, Python, Linux Admin
- Deep knowledge of Terraform and AWS to automate and provision EC2, EBS, and SQL Server; cost optimization; CI/CD pipelines using Jenkins. Serverless automation is a plus.
- Excellent writing and communication skills in English. Enjoy writing crisp and understandable documentation
- Comfortable programming in one or more scripting languages
- Enjoys tinkering with tooling. Find easier ways to handle systems by doing some research. Strong awareness around build vs buy.
Experience: 4-8 years
Notice Period: 15-30 days
Mandatory Skill Set:
Front End: ReactJS / JavaScript / CSS / jQuery / Bootstrap
Backend: Python / Django / Flask / Tornado
Responsibilities :
- Responsible for the design and architecture of functional prototypes and production-ready systems
- Uses open-source frameworks as appropriate; Django preferred.
- Develops Python and JavaScript code as necessary.
- Coordinating with the team lead / product team and contributing to business requirements in terms of code.
- Write REST APIs and documentation to support consumption of these APIs.
- Communicate technical concepts with trade-offs, risks, and benefits.
- Evaluate and resolve product related issues.
Requirements :
- Demonstrable experience writing clean, thoughtful, and business-oriented code
- Strong understanding of JavaScript, HTML, and CSS3. Knowledge of ReactJS and Redux is a plus.
- Good understanding of REST APIs and experience in building them. Knowledge of Django REST Framework is a plus.
- Experience on asynchronous request handling, partial page updates, and AJAX.
- Proficient understanding of cross browser compatibility issues and ways to work around such issues
- Proficient understanding of code versioning tools, such as Git / Mercurial / SVN
- Proactive in terms of sharing updates across the entire team.
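The REST API items above largely come down to shaping stored data into predictable JSON. A minimal sketch (the field names are hypothetical; Django REST Framework serializers and paginators automate this pattern):

```python
import json

def serialize_task(row: dict) -> dict:
    """Map a stored record onto the public API shape (fields hypothetical)."""
    return {
        "id": row["id"],
        "title": row["title"],
        "done": bool(row.get("done", False)),
    }

def list_response(rows: list) -> str:
    """Build the JSON body a list endpoint would return."""
    return json.dumps({
        "count": len(rows),
        "results": [serialize_task(r) for r in rows],
    })
```

Centralizing the serialization step keeps the API contract stable even when the underlying storage schema changes.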
About Us - Celebal Technologies is a premier software services company in the fields of Data Science, Big Data, and Enterprise Cloud. Celebal Technologies helps you discover your competitive advantage by employing intelligent data solutions built on cutting-edge technology that can bring massive value to your organization. The core offerings are around "Data to Intelligence": we leverage data to extract intelligence and patterns, facilitating smarter and quicker decision-making for clients. Understanding the core value of modern analytics for the enterprise, we help businesses improve their business intelligence and become more data-driven in architecting solutions.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• Experience with GitHub, JIRA, Confluence, and continuous integration (CI) systems.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow tools such as Agile, Jira, Scrum/Kanban, etc.
• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)
• Strong communication skills and ability to explain protocol and processes with team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated; demonstrating the ability to achieve in technologies with minimal supervision.
• Organized, flexible, and analytical ability to solve problems creatively.
Fintech Leader, building a product on data Science
Data Scientist
We are looking for experienced Data Scientists to join our engineering team and help us enhance our mobile application with data. In this role, we're looking for people who are passionate about developing ML/AI solutions across various domains to solve enterprise problems. We are keen on hiring someone who loves working in a fast-paced start-up environment and is looking to solve some challenging engineering problems.
As one of the earliest members in engineering, you will have the flexibility to design the models and architecture from the ground up. As with any early-stage start-up, we expect you to be comfortable wearing various hats and to be a proactive contributor in building something truly remarkable.
Responsibilities
- Research, develop, and maintain machine learning and statistical models for business requirements
- Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem
- Coordinate with different functional teams to monitor outcomes and refine/improve the machine learning models
- Implement models to uncover patterns and predictions, creating business value and innovation
- Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization
- Develop NLP concepts and algorithms to classify and summarize structured/unstructured text data
Qualifications
- 3+ years of experience solving complex business problems using machine learning
- Fluency in programming languages such as Python, and in NLP techniques such as BERT, is a must
- Strong analytical and critical thinking skills
- Experience in building production-quality models using state-of-the-art technologies
- Familiarity with databases like MySQL, Oracle, SQL Server, NoSQL, etc. is desirable
- Ability to collaborate on projects and work independently when required
- Previous experience in the Fintech/payments domain is a bonus
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or another quantitative field from a top-tier institute
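The NLP responsibilities above often begin with text classification. A toy, stdlib-only sketch (the labels and keyword lexicons are invented; a production system would fine-tune BERT or train a scikit-learn pipeline on labelled payment-support tickets):

```python
import re
from collections import Counter

# Invented label lexicons, for illustration only.
LABEL_KEYWORDS = {
    "fraud": {"unauthorized", "chargeback", "stolen"},
    "refund": {"refund", "return", "cancel", "cancelled"},
}

def tokenize(text: str) -> list:
    """Lowercase and split text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def classify(text: str) -> str:
    """Pick the label whose keywords occur most often in the text."""
    counts = Counter(tokenize(text))
    scores = {
        label: sum(counts[word] for word in words)
        for label, words in LABEL_KEYWORDS.items()
    }
    return max(scores, key=scores.get)
```

The interesting part is the shape of the pipeline (tokenize, score, pick a label); swapping the keyword scorer for a trained model keeps the surrounding code unchanged.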
1. Should have worked in Agile methodology and microservices architecture
2. Should have 7+ years of experience in Python and Django framework
3. Should have a good knowledge of DRF
4. Should have knowledge of User Auth (JWT, OAuth2), API Auth, Access Control List, etc.
5. Should have working experience in session management in Django
6. Should have expertise in the Django MVC and uses of templates in frontend
7. Should have working experience in PostgreSQL
8. Should have working experience with the RabbitMQ messaging broker and Celery
9. Good to have javascript implementation knowledge in Django templates
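Item 8 above (RabbitMQ plus Celery) is, at its core, a producer/worker hand-off. This stdlib sketch only illustrates that shape; in a real Django project the task body would be a `@app.task` function and RabbitMQ would carry the messages between processes:

```python
import queue
import threading

jobs = queue.Queue()   # stands in for the RabbitMQ message queue
results = []

def worker() -> None:
    """Stand-in for a Celery worker consuming from the broker."""
    while True:
        item = jobs.get()
        if item is None:          # sentinel: shut the worker down
            break
        results.append(item * 2)  # stand-in for a real task body

consumer = threading.Thread(target=worker)
consumer.start()
for n in (1, 2, 3):               # the web process "publishes" tasks
    jobs.put(n)
jobs.put(None)
consumer.join()
```

The payoff of this architecture is that slow work leaves the request/response cycle; the web process returns immediately while workers churn in the background.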
About us:
Arista Networks was founded to pioneer and deliver software driven cloud networking solutions for large datacenter storage and computing environments. Arista's award-winning platforms, ranging in Ethernet speeds from 10 to 400 gigabits per second, redefine scalability, agility and resilience. Arista has shipped more than 20 million cloud networking ports worldwide with CloudVision and EOS, an advanced network operating system. Committed to open standards, Arista is a founding member of the 25/50GbE consortium. Arista Networks products are available worldwide directly and through partners.
About the job
Arista Networks is looking for world-class software engineers to join our Extensible Operating System (EOS) software development team. As a core member of the EOS team, you will be part of a fast-paced, high-caliber team building features to run the world's largest data center networks. Your software will be a key component of EOS, Arista's unique, Linux-based network operating system that runs on all of Arista's data center networking products.
The EOS team is responsible for all aspects of the development and delivery of software meant to run on the various Arista switches. You will work with your fellow engineers and members of the marketing team to gather and understand the functional and technical requirements for upcoming projects. You will help write functional specifications, design specifications, test plans, and the code to bring all of these to life. You will also work with customers to triage and fix problems in their networks. Internally, you will develop automated tests for your software, monitor the execution of those tests, and triage and fix problems found by your tests. At Arista, you will own your projects from definition to deployment, and you will be responsible for the quality of everything you deliver.
This role demands strong and broad software engineering fundamentals and a good understanding of networking, including capabilities like L2, L3, and the fundamentals of commercial switching hardware. Your role will not be limited to a single aspect of EOS at Arista, but will cover all aspects of EOS.
Responsibilities:
- Write functional specifications and design specifications for features related to forwarding traffic on the internet and cloud data centers.
- Independently implement solutions to small-sized problems in our EOS software, using the C, C++, and Python programming languages.
- Write test plan specifications for small-sized features in EOS, and implement automated test programs to execute the cases described in the test plan.
- Debug problems found by our automated test programs and fix the problems.
- Work on a team implementing, testing, and debugging solutions to larger routing protocol problems.
- Work with Customer Support Engineers to analyze problems in customer networks and provide fixes for those problems when needed in the form of new software releases or software patches.
- Work with the System Test Engineers to analyze problems found in their tests and provide fixes for those problems.
- Mentor new and junior engineers to bring them up to speed in Arista’s software development environment.
- Review and contribute to the specifications and implementations written by other team members.
- Help to create a schedule for the implementation and debugging tasks, update that schedule weekly, and report it to the project lead.
Qualifications:
- BS in Computer Science/Electrical Engineering/Computer Engineering with 3-10 years of experience, MS in Computer Science/Electrical Engineering/Computer Engineering with 5+ years of experience, a Ph.D. in Computer Science/Electrical Engineering/Computer Engineering, or equivalent work experience.
- Knowledge of C, C++, and/or Python.
- Knowledge of UNIX or Linux.
- Understanding of L2/L3 networking including at least one of the following areas is desirable:
- IP routing protocols, such as RIP, OSPF, BGP, IS-IS, or PIM.
- Layer 2 features such as 802.1d bridging, the 802.1d Spanning Tree Protocol, the 802.1ax Link Aggregation Control Protocol, the 802.1AB Link Layer Discovery Protocol, or RFC 1812 IP routing.
- Ability to utilize, test, and debug packet forwarding engine and a hardware component’s vendor provided software libraries in your solutions.
- Infrastructure functions related to distributed systems such as messaging, signalling, databases, and command line interface techniques.
- Hands on experience in the design and development of ethernet bridging or routing related software or distributed systems software is desirable.
- Hands on experience with enterprise or service provider class Ethernet switch/router system software development, or significant PhD level research in the area of network routing and packet forwarding.
- Applied understanding of software engineering principles.
- Strong problem solving and software troubleshooting skills.
- Ability to design a solution to a small-sized problem and implement that solution without outside help. Able to work on a small team solving a medium-sized problem with limited oversight.
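Much of the L3 routing work described above revolves around longest-prefix-match lookups. A hedged Python sketch of the idea (EOS programs hardware forwarding tables; this linear scan is purely illustrative):

```python
import ipaddress

class RoutingTable:
    """Toy route table doing longest-prefix-match in software."""

    def __init__(self):
        self._routes = []  # list of (network, next_hop) pairs

    def add(self, cidr: str, next_hop: str) -> None:
        self._routes.append((ipaddress.ip_network(cidr), next_hop))

    def lookup(self, destination: str):
        """Return the next hop of the most specific matching route."""
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in self._routes if addr in net]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]
```

Real forwarding planes use tries or TCAMs so the lookup cost does not grow with table size, but the selection rule (most specific prefix wins) is the same.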
Resources:
- Arista's Approach to Software with Ken Duda (CTO): https://youtu.be/TU8yNh5JCyw
- Additional information and resources can be found at https://www.arista.com/en/
Avegen is a digital healthcare company empowering individuals to take control of their health and supporting healthcare professionals in delivering life-changing care. Avegen’s core product, HealthMachine®, is a cloud-hosted, next-generation digital healthcare engine for pioneers in digital healthcare, including healthcare providers and pharmaceutical companies, to deploy high-quality robust digital care solutions efficiently and effectively. We are ISO27001, ISO13485, and Cyber essentials certified; compliant with NHS Data protection toolkit and GDPR.
Job Summary:
We are looking for a Mobile Automation Tester who is passionate about mobile app automation and has worked with one or more mobile automation frameworks.
Roles and Responsibilities :
- Write, design, and execute automated tests by creating scripts that run testing functions automatically.
- Build test automation frameworks.
- Work in an agile development environment where developers and testers work closely together to ensure requirements are met.
- Design, document, manage and execute test cases, sets, and suites.
- Work in cross-functional project teams that include Development, Marketing, Usability, Software Quality Assurance, Customer Learning, and Support.
- Review test cases and automate whenever possible.
- Educate team members on test automation and drive adoption.
- Integrate automated test cases into nightly build systems.
Required Skills:
- Previous experience working as a QA automation engineer.
- Experience in mobile testing: iOS automation and Android automation.
- Hands-on experience in any programming language like Java, Python, JavaScript, Ruby, or C#.
- Experience and knowledge of tools like JIRA, Selenium, and Postman, plus web and app test automation.
- Ability to deliver results under pressure.
- Self-development skills to keep up to date with fast-changing trends.
Good to Have Skills:
- Experience working with CI/CD pipelines like (Jenkins, Circle CI).
- API, DB Automation.
- Excellent scripting experience.
Educational Qualifications:
● Candidates with Bachelor / Master's degree would be preferred
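Mobile automation suites like the one described above typically lean on the Page Object pattern. A hedged sketch with a stubbed driver (the element IDs and `FakeDriver` are invented; with Appium the driver would be a real `webdriver.Remote(...)` session):

```python
class LoginPage:
    """Page Object for a hypothetical login screen."""

    def __init__(self, driver):
        self.driver = driver

    def login(self, username: str, password: str) -> str:
        self.driver.send_keys("username_field", username)
        self.driver.send_keys("password_field", password)
        self.driver.tap("login_button")
        return self.driver.current_screen()

class FakeDriver:
    """Test double standing in for an Appium/Selenium driver."""

    def __init__(self):
        self.events = []

    def send_keys(self, element_id: str, text: str) -> None:
        self.events.append(("type", element_id, text))

    def tap(self, element_id: str) -> None:
        self.events.append(("tap", element_id))

    def current_screen(self) -> str:
        return "home" if ("tap", "login_button") in self.events else "login"
```

Keeping locators inside page objects means a UI change touches one class rather than every test.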
Job Responsibilities:
Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment, creating high- and low-level designs for concurrent, high-throughput, low-latency software architecture.
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, CPU, and resource management
- Analyze stack traces, memory profiles and production incident reports from traders and support teams
- Propose fixes, and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
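Multi-threaded server work of this kind hinges on guarding shared state. A minimal, hedged sketch (the class and its fields are invented; real trading systems add matching engines, price-time priority, and far finer-grained concurrency control):

```python
import threading

class OrderStore:
    """Thread-safe in-memory order store (illustrative only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._orders = {}
        self._next_id = 0

    def place(self, symbol: str, qty: int) -> int:
        with self._lock:  # serialize id allocation and insertion
            self._next_id += 1
            self._orders[self._next_id] = (symbol, qty)
            return self._next_id

    def cancel(self, order_id: int) -> bool:
        with self._lock:
            return self._orders.pop(order_id, None) is not None

    def open_orders(self) -> int:
        with self._lock:
            return len(self._orders)
```

Without the lock, two threads could be assigned the same order id or corrupt the dict mid-update; holding it only for the brief critical sections keeps contention low.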