50+ Python Jobs in Pune | Python Job openings in Pune
Apply to 50+ Python Jobs in Pune on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Greetings , Wissen Technology is Hiring for the position of Data Engineer
Please find the Job Description for your Reference:
JD
- Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics.
- Implement data ingestion processes from various sources including APIs, databases, and flat files.
- Optimize and tune big data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Manage and monitor EMR clusters, ensuring high availability and reliability.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses.
- Implement data security best practices to ensure data is protected and compliant with relevant regulations.
- Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
- Troubleshoot and resolve issues related to data processing and EMR cluster performance.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, with a focus on big data technologies.
- Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with SQL and NoSQL databases.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving skills and the ability to work independently and collaboratively in a team environment
at Sarvaha Systems Private Limited
Sarvaha would like to welcome talented Software Development Engineer in Test (SDET) with minimum 5 years of experience to join our team. As an SDET, you will champion the quality of the product and will design, develop, and maintain modular, extensible, and reusable test cases/scripts. This is a hands-on role which requires you to work with automation test developers and application developers to enhance the quality of the products and development practices. Please visit our website at http://www.sarvaha.com to know more about us.
Key Responsibilities
- Understand requirements through specification or exploratory testing, estimate QA efforts, design test strategy, develop optimal test cases, maintain RTM
- Design, develop & maintain a scalable test automation framework
- Build interfaces to seamlessly integrate testing with development environments.
- Create & manage test setups that prioritize scalability, remote accessibility and reliability.
- Automate test scripts, create and execute relevant test suites, analyze test results and enhance existing or build newer scripts for coverage. Communicate with stakeholders for requirements, troubleshooting etc; provide visibility into the works by sharing relevant reports and metrics
- Stay up-to-date with industry best practices in testing methodologies and technologies to advise QA and integration teams.
Skills Required
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (Software Engineering preferred).
- Minimum 5+ years of experience in testing enterprise-grade/highly scalable, distributed applications, products, and services.
- Expertise in manual and Automation testing with excellent understanding of test methodologies and test design techniques, test life cycle.
- Strong programming skills in Typescript and Python, with experience using Playwright for building hybrid/BDD frameworks for Website and API automation
- Very good problem-solving and analytical skills.
- Experience in databases, both SQL and No-SQL.
- Practical experience in setting up CI/CD pipelines (ideally with Jenkins).
- Exposure to Docker, Kubernetes and EKS is highly desired.
- C# experience is an added advantage.
- A continuous learning mindset and a passion for exploring new technologies.
- Excellent communication, collaboration, quick learning of needed language/scripting and influencing skills.
Position Benefits
- Competitive salary and excellent growth opportunities within a dynamic team.
- Positive and collaborative work environment with the opportunity to learn from talented colleagues.
- Highly challenging and rewarding software development problems to solve.
- Hybrid work model with established remote work options.
Job Description:
Ideal experience required – 6-8 years
- Mandatory hands-on experience in .Net Core (8)
- Mandatory hands-on experience in Angular (10+ version required)
- Azure and Microservice Architecture experience is good to have.
- No database or domain constraint
Skills:
- 7 to 10 years of working experience in managing .net projects closely with internal and external clients in structured contexts in an international environment.
- Strong knowledge of .Net Core, .NET MVC, C#, SQL Server & JavaScript
- Working experience in Angular
- Familiar with various design and architectural patterns
- Should be familiar with Git source code management for code repository.
- Should be able to write clean, readable, and easily maintainable code.
- Understanding of fundamental design principles for building a scalable application
- Experience in implementing automated testing platforms and unit test.
Nice to have:
- AWS
- Elastic Search
- Mongo DB
Responsibilities:
- Should be able to handle modules/project independently with minor supervision.
- Should be good in troubleshooting and problem-solving skills.
- Should be able to take complete ownership of modules and projects.
- Should be able to communicate and coordinate with multiple teams.
- Must have good verbal & written communication skill.
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the Manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking Maintenance, increasing the OEE and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned Universities, and the award of a renowned AI prize (e.g., EU Horizon 2020) which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated senior Data Engineer from the manufacturing Industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lake-house & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge in big data technologies such as Spark, Flink, Hadoop, Apache, and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
- Benefits And Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT ?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
We're seeking an experienced Backend Software Engineer to join our team.
As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.
This includes APIs, databases, and server-side logic.
Responsibilities
- Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
- Write clean, efficient, and well-documented code that adheres to industry standards and best practices
- Participate in code reviews and contribute to the improvement of the codebase
- Debug and resolve issues in the existing codebase
- Develop and execute unit tests to ensure high code quality
- Work with DevOps engineers to ensure seamless deployment of software changes
- Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
- Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
- Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements
- At least 3+ years of experience building scalable and reliable backend systems
- Strong proficiency in either of the programming languages such as Python, Node.js, Golang, RoR
- Experience with either of the frameworks such as Django, Express, gRPC
- Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
- Familiarity with containerization technologies such as Docker and Kubernetes
- Understanding of software development methodologies such as Agile and Scrum
- Ability to demonstrate flexibility wrt picking a new technology stack and ramping up on the same fairly quickly
- Bachelor's/Master's degree in Computer Science or related field
- Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
- Good written and verbal communication skills in English
Lean provides developers with a universal API to access their customers' financial accounts across the Middle East. We recognized that infrastructure barriers were hindering fintech growth in our home markets and set out to build a solution. With Lean, developers at any level can now create advanced financial solutions without grappling with infrastructure complexities, allowing them to focus squarely on customer needs.
Why Join Us?
Our products have garnered enthusiastic feedback from both developers and customers. With Sequoia leading our $33 million Series A round, our debut in the GCC marks just the beginning. We're committed to expanding regionally and enhancing stakeholder value. If you thrive on solving challenges and making a lasting impact, Lean is the place for you.
We offer competitive salaries, private healthcare, flexible office hours, and ensure every team member holds meaningful equity. Join us on our journey of enabling the next wave of financial innovation!
We are seeking an experienced Test Engineer to join our team in the MENA region. As a Software Development Engineer in Test (SDET), you will ensure the reliability and high performance of our open banking systems through both functional and non-functional testing. This is a fantastic opportunity for an SDET to learn from and be mentored by an experienced team based in Pune and Dubai.
Your duties will include creating and executing test plans, contributing to our in-house automation framework and tools, analyzing test results, and communicating with stakeholders to ensure the delivery of high-quality products. As a key member of the team, you will meticulously analyze test results, identify potential issues, and collaborate effectively with stakeholders to guarantee the delivery of resilient products.
Responsibilities:
- Maintain and adapt automation frameworks for open banking solutions.
- Work closely with all stakeholders to understand the full context of deliveries and translate complex functional and non-functional requirements.
- Ensure all Quality Guideline requirements are understood by the team and met during the development cycle.
- Monitor and report on test results, identifying potential issues.
- Take initiative in continuous self-learning and skill development.
Requirements:
- Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- At least 5 to 8 years of experience in the testing domain, with a focus on automation.
- Proven experience as an automation engineer or SDET, with a track record of building and maintaining scalable and reliable test frameworks.
- Strong programming skills in languages such as JavaScript, TypeScript, and Python.
- Experience with automated testing tools and frameworks (e.g., WebdriverIO, Selenium, or similar tools).
- A passion for testing and the ability to bring new ideas, along with the capability to work independently.
- Ability to analyze test results and collaborate with development teams to optimize testing practices.
- Monitor and report on quality metrics, identifying potential performance issues.
- Good to have experience with CI/CD tools like Jenkins.
- Excellent verbal, written, and interpersonal skills.
- Knowledge of Grafana, Kibana, and performance testing is an added advantage.
- Ability to adapt, communicate effectively, and deliver results independently.
Join our team and help us deliver high-quality open banking solutions in the MENA region!
Preferred Skills:
- Experience with XML-based web services (SOAP, REST).
- Knowledge of database technologies (SQL, NoSQL) for XML data storage.
- Familiarity with version control systems (Git, SVN).
- Understanding of JSON and other data interchange formats.
- Certifications in XML technologies are a plus.
Company Description
AdElement is a leading digital advertising technology company that has been helping app publishers increase their ad revenue and reach untapped demand since 2011. With our expertise in connecting brands to app audiences on evolving screens, such as VR headsets and vehicle consoles, we enable our clients to be first to market. We have been recognized as the Google Agency of the Year and have offices globally, with our headquarters located in New Brunswick, New Jersey.
Job Description
Work alongside a highly skilled engineering team to design, develop, and maintain large-scale, highly performant, real-time applications.
Own building features, driving directly with product and other engineering teams.
Demonstrate excellent communication skills in working with technical and non-technical audiences.
Be an evangelist for best practices across all functions - developers, QA, and infrastructure/ops.
Be an evangelist for platform innovation and reuse.
Requirements:
2+ years of experience building large-scale and low-latency distributed systems.
Command of Java or C++.
Solid understanding of algorithms, data structures, performance optimization techniques, object-oriented programming, multi-threading, and real-time programming.
Experience with distributed caching, SQL/NO SQL, and other databases is a plus.
Experience with Big Data and cloud services such as AWS/GCP is a plus.
Experience in the advertising domain is a big plus.
B. S. or M. S. degree in Computer Science, Engineering, or equivalent.
Location: Pune, Maharashtra.
Sr. Data Engineer (Data Warehouse-Snowflake)
Experience: 5+yrs
Location: Pune (Hybrid)
As a Senior Data engineer with Snowflake expertise you are a subject matter expert who is curious and an innovative thinker to mentor young professionals. You are a key person to convert Vision and Data Strategy for Data solutions and deliver them. With your knowledge you will help create data-driven thinking within the organization, not just within Data teams, but also in the wider stakeholder community.
Skills Preferred
- Advanced written, verbal, and analytic skills, and demonstrated ability to influence and facilitate sustained change. Ability to convey information clearly and concisely to all levels of staff and management about programs, services, best practices, strategies, and organizational mission and values.
- Proven ability to focus on priorities, strategies, and vision.
- Very Good understanding in Data Foundation initiatives, like Data Modelling, Data Quality Management, Data Governance, Data Maturity Assessments and Data Strategy in support of the key business stakeholders.
- Actively deliver the roll-out and embedding of Data Foundation initiatives in support of the key business programs advising on the technology and using leading market standard tools.
- Coordinate the change management process, incident management and problem management process.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Drive implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation and maximize value delivery
Knowledge Preferred
- Extensive knowledge and hands on experience with Snowflake and its different components like User/Group, Data Store/ Warehouse management, External Stage/table, working with semi structured data, Snowpipe etc.
- Implement and manage CI/CD for migrating and deploying codes to higher environments with Snowflake codes.
- Proven experience with Snowflake Access control and authentication, data security, data sharing, working with VS Code extension for snowflake, replication, and failover, optimizing SQL, analytical ability to troubleshoot and debug on development and production issues quickly is key for success in this role.
- Proven technology champion in working with relational, Data warehouses databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Highly Experienced in building and optimizing complex queries. Good with manipulating, processing, and extracting value from large, disconnected datasets.
- Your experience in handling big data sets and big data technologies will be an asset.
- Proven champion with in-depth knowledge of any one of the scripting languages: Python, SQL, Pyspark.
Primary responsibilities
- You will be an asset in our team bringing deep technical skills and capabilities to become a key part of projects defining the data journey in our company, keen to engage, network and innovate in collaboration with company wide teams.
- Collaborate with the data and analytics team to develop and maintain a data model and data governance infrastructure using a range of different storage technologies that enables optimal data storage and sharing using advanced methods.
- Support the development of processes and standards for data mining, data modeling and data protection.
- Design and implement continuous process improvements for automating manual processes and optimizing data delivery.
- Assess and report on the unique data needs of key stakeholders and troubleshoot any data-related technical issues through to resolution.
- Work to improve data models that support business intelligence tools, improve data accessibility and foster data-driven decision making.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Manage and lead technical design and development activities for implementation of large-scale data solutions in Snowflake to support multiple use cases (transformation, reporting and analytics, data monetization, etc.).
- Translate advanced business data, integration and analytics problems into technical approaches that yield actionable recommendations, across multiple, diverse domains; communicate results and educate others through design and build of insightful presentations.
- Exhibit strong knowledge of the Snowflake ecosystem and can clearly articulate the value proposition of cloud modernization/transformation to a wide range of stakeholders.
Relevant work experience
Bachelors in a Science, Technology, Engineering, Mathematics or Computer Science discipline or equivalent with 7+ Years of experience in enterprise-wide data warehousing, governance, policies, procedures, and implementation.
Aptitude for working with data, interpreting results, business intelligence and analytic best practices.
Business understanding
Good knowledge and understanding of Consumer and industrial products sector and IoT.
Good functional understanding of solutions supporting business processes.
Skill Must have
- Snowflake 5+ years
- Overall different Data warehousing techs 5+ years
- SQL 5+ years
- Data warehouse designing experience 3+ years
- Experience with cloud and on-prem hybrid models in data architecture
- Knowledge of Data Governance and strong understanding of data lineage and data quality
- Programming & Scripting: Python, Pyspark
- Database technologies such as Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)
Nice to have
- Demonstrated experience in modern enterprise data integration platforms such as Informatica
- AWS cloud services: S3, Lambda, Glue and Kinesis and API Gateway, EC2, EMR, RDS, Redshift and Kinesis
- Good understanding of Data Architecture approaches
- Experience in designing and building streaming data ingestion, analysis and processing pipelines using Kafka, Kafka Streams, Spark Streaming, Stream sets and similar cloud native technologies.
- Experience with implementation of operations concerns for a data platform such as monitoring, security, and scalability
- Experience working in DevOps, Agile, Scrum, Continuous Delivery and/or Rapid Application Development environments
- Building mock and proof-of-concepts across different capabilities/tool sets exposure
- Experience working with structured, semi-structured, and unstructured data, extracting information, and identifying linkages across disparate data sets
Primary Skills
DynamoDB, Java, Kafka, Spark, Amazon Redshift, AWS Lake Formation, AWS Glue, Python
Skills:
Good work experience showing growth as a Data Engineer.
Hands On programming experience
Implementation Experience on Kafka, Kinesis, Spark, AWS Glue, AWS Lake Formation.
Excellent knowledge in: Python, Scala/Java, Spark, AWS (Lambda, Step Functions, Dynamodb, EMR), Terraform, UI (Angular), Git, Mavena
Experience of performance optimization in Batch and Real time processing applications
Expertise in Data Governance and Data Security Implementation
Good hands-on design and programming skills building reusable tools and products Experience developing in AWS or similar cloud platforms. Preferred:, ECS, EKS, S3, EMR, DynamoDB, Aurora, Redshift, Quick Sight or similar.
Familiarity with systems with very high volume of transactions, micro service design, or data processing pipelines (Spark).
Knowledge and hands-on experience with server less technologies such as Lambda, MSK, MWAA, Kinesis Analytics a plus.
Expertise in practices like Agile, Peer reviews, Continuous Integration
Roles and responsibilities:
Determining project requirements and developing work schedules for the team.
Delegating tasks and achieving daily, weekly, and monthly goals.
Responsible for designing, building, testing, and deploying the software releases.
Salary: 25LPA-40LPA
Job Description:
· Proficient In Python.
· Good knowledge of Stress/Load Testing and Performance Testing.
· Knowledge in Linux.
About Us
Sahaj Software is an artisanal software engineering firm built on the values of trust, respect, curiosity, and craftsmanship, and delivering purpose-built solutions to drive data-led transformation for organisations. Our emphasis is on craft as we create purpose-built solutions, leveraging Data Engineering, Platform Engineering and Data Science with a razor-sharp focus to solve complex business and technology challenges and provide customers with a competitive edge
About The Role
As a Data Engineer, you’ll feel at home if you are hands-on, grounded, opinionated and passionate about delivering comprehensive data solutions that align with modern data architecture approaches. Your work will range from building a full data platform to building data pipelines or helping with data architecture and strategy. This role is ideal for those looking to have a large impact and huge scope for growth, while still being hands-on with technology. We aim to allow growth without becoming “post-technical”.
Responsibilities
- Collaborate with Data Scientists and Engineers to deliver production-quality AI and Machine Learning systems
- Build frameworks and supporting tooling for data ingestion from a complex variety of sources
- Consult with our clients on data strategy, modernising their data infrastructure, architecture and technology
- Model their data for increased visibility and performance
- You will be given ownership of your work, and are encouraged to propose alternatives and make a case for doing things differently; our clients trust us and we manage ourselves.
- You will work in short sprints to deliver working software
- You will be working with other data engineers in Sahaj and work on building Data Engineering capability across the organisation
You can read more about what we do and how we think here: https://sahaj.ai/client-stories/
Skills you’ll need
- Demonstrated experience as a Senior Data Engineer in complex enterprise environments
- Deep understanding of technology fundamentals and experience with languages like Python, or functional programming languages like Scala
- Demonstrated experience in the design and development of big data applications using tech stacks like Databricks, Apache Spark, HDFS, HBase and Snowflake
- Commendable skills in building data products, by integrating large sets of data from hundreds of internal and external sources would be highly critical
- A nuanced understanding of code quality, maintainability and practices like Test Driven Development
- Ability to deliver an application end to end; having an opinion on how your code should be built, packaged and deployed using CI/CD
- Understanding of Cloud platforms, DevOps, GitOps, and Containers
What will you experience as a culture at Sahaj?
At Sahaj, people's collective stands for a shared purpose where everyone owns the dreams, ideas, ideologies, successes, and failures of the organisation - a synergy that is rooted in the ethos of honesty, respect, trust, and equitability. At Sahaj, you will experience
- Creativity
- Ownership
- Curiosity
- Craftsmanship
- A culture of trust, respect and transparency
- Opportunity to collaborate with some of the finest minds in the industry
- Work across multiple domains
What are the benefits of being at Sahaj?
- Unlimited leaves
- Life Insurance & Private Health insurance paid by Sahaj
- Stock options
- No hierarchy
- Open Salaries
We are looking for QA role who has experience into Python ,AWS,and chaos engineering tool(Monkey,Gremlin)
⦁ Strong understanding of distributed systems
- Cloud computing (AWS), and networking principles.
- Ability to understand complex trading systems and prepare and execute plans to induce failures
- Python.
- Experience with chaos engineering tooling such as Chaos Monkey, Gremlin, or similar
Domain: - Investment Banking or Electronic Trading is mandatory
- Develop (Python/Py test) automation tests in all components (e.g. API testing, client-server testing, E2E testing etc.) to meet product requirements and customer usages
- Hands-On experience in Python
- Proficiency in test automation frameworks and tools such as Selenium, Cucumber.
- Experience working in a Microsoft Windows and Linux environment
- Experience using Postman and automated API testing
- Experience designing & executing load/stress and performance testing
- Experience using test cases & test execution management tools and issues management tools (e.g Jira), and development environments (like Visual Studio, IntelliJ, or Eclipse).
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
Publicis Sapient Overview:
The Senior Associate People Senior Associate L1 in Data Engineering, you will translate client requirements into technical design, and implement components for data engineering solution. Utilize deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to insure the necessary health of the overall solution
.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical design, and implement components for data engineering solution. Utilize deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to insure the necessary health of the overall solution
The role requires a hands-on technologist who has strong programming background like Java / Scala / Python, should have experience in Data Ingestion, Integration and data Wrangling, Computation, Analytics pipelines and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge on at least one of AWS, GCP, Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1.Overall 5+ years of IT experience with 3+ years in Data related technologies
2.Minimum 2.5 years of experience in Big Data technologies and working exposure in at least one cloud platform on related data services (AWS / Azure / GCP)
3.Hands-on experience with the Hadoop stack – HDFS, sqoop, kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, hive, oozie, airflow and other components required in building end to end data pipeline.
4.Strong experience in at least of the programming language Java, Scala, Python. Java preferable
5.Hands-on working knowledge of NoSQL and MPP data platforms like Hbase, MongoDb, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery etc
6.Well-versed and working knowledge with data platform related services on at least 1 cloud platform, IAM and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1.Good knowledge of traditional ETL tools (Informatica, Talend, etc) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands on experience
2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc
3.Knowledge on distributed messaging frameworks like ActiveMQ / RabbiMQ / Solace, search & indexing and Micro services architectures
4.Performance tuning and optimization of data pipelines
5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6.Cloud data specialty and other related Big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
A modern work platform means a single source of truth for your desk and deskless employees alike, where everything they need is organized and easy to find.
MangoApps was designed to unify your employee experience by combining intranet, communication, collaboration, and training into one intuitive, mobile-accessible workspace.
We are looking for a highly capable machine learning engineer to optimize our machine learning systems. You will be evaluating existing machine learning (ML) processes, performing statistical analysis to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a related ML role. A machine learning engineer will be someone whose expertise translates into the enhanced performance of predictive automation software.
AI/ML Engineer Responsibilities:
- Designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transforming data science prototypes and applying appropriate ML algorithms and tools.
- Ensuring that algorithms generate accurate user recommendations.
- Turning unstructured data into useful information by auto-tagging images and text-to-speech conversions.
- Solving complex problems with multi-layered data sets, as well as optimizing existing machine learning libraries and frameworks.
- Developing ML algorithms to huge volumes of historical data to make predictions.
- Running tests, performing statistical analysis, and interpreting test results.
- Documenting machine learning processes.
- Keeping abreast of developments in machine learning.
AI/ML Engineer Requirements:
- Bachelor's degree in computer science, data science, mathematics, or a related field with at least 3+yrs of experience as an AI/ML Engineer
- Advanced proficiency with Python and FastAPI framework along with good exposure to libraries like scikit-learn, Pandas, NumPy etc..
- Experience in working on ChatGPT, LangChain (Must), Large Language Models (Good to have) & Knowledge Graphs
- Extensive knowledge of ML frameworks, libraries, data structures, data modelling, and software architecture.
- In-depth knowledge of mathematics, statistics, and algorithms.
- Superb analytical and problem-solving abilities.
- Great communication and collaboration skills.
Why work with us
- We take delight in what we do, and it shows in the products we offer and ratings of our products by leading industry analysts like IDC, Forrester and Gartner OR independent sites like Capterra.
- Be part of the team that has a great product-market fit, solving some of the most relevant communication and collaboration challenges faced by big and small organizations across the globe.
- MangoApps is highly collaborative place and careers at MangoApps come with a lot of growth and learning opportunities. If you’re looking to make an impact, MangoApps is the place for you.
- We focus on getting things done and know how to have fun while we do them. We have a team that brings creativity, energy, and excellence to every engagement.
- A workplace that was listed as one of the top 51 Dream Companies to work for by World HRD Congress in 2019.
- As a group, we are flat and treat everyone the same.
Benefits
We are a young organization and growing fast. Along with the fantastic workplace culture that helps you meet your career aspirations; we provide some comprehensive benefits.
1. Comprehensive Health Insurance for Family (Including Parents) with no riders attached.
2. Accident Insurance for each employee.
3. Sponsored Trainings, Courses and Nano Degrees.
About You
· Self-motivated: You can work with a minimum of supervision and be capable of strategically prioritizing multiple tasks in a proactive manner.
· Driven: You are a driven team player, collaborator, and relationship builder whose infectious can-do attitude inspires others and encourages great performance in a fast-moving environment.
· Entrepreneurial: You thrive in a fast-paced, changing environment and you’re excited by the chance to play a large role.
· Passionate: You must be passionate about online collaboration and ensuring our clients are successful; we love seeing hunger and ambition.
· Thrive in a start-up mentality with a “whatever it takes” attitude.
About DeepIntent:
DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.
What You’ll Do:
- Establish formal data practice for the organisation.
- Build & operate scalable and robust data architectures.
- Create pipelines for the self-service introduction and usage of new data
- Implement DataOps practices
- Design, Develop, and operate Data Pipelines which support Data scientists and machine learning
- Engineers.
- Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy
- to deploy and manage.
- Collaborate with various business stakeholders, software engineers, machine learning
- engineers, and analysts.
Who You Are:
- Experience in designing, developing and operating configurable Data pipelines serving high
- volume and velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, data architecture, Agile and
- DevOps methodologies.
- Experience building Data architectures that optimize performance and cost, whether the
- components are prepackaged or homegrown
- Proficient with SQL, Java, Spring boot, Python or JVM-based language, Bash
- Experience with any of Apache open source projects such as Spark, Druid, Beam, Airflow
- etc. and big data databases like BigQuery, Clickhouse, etc
- Good communication skills with the ability to collaborate with both technical and non-technical
- people.
- Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious
Publicis Sapient Overview:
The Senior Associate People Senior Associate L1 in Data Engineering, you will translate client requirements into technical design, and implement components for data engineering solution. Utilize deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to insure the necessary health of the overall solution
.
Job Summary:
As Senior Associate L1 in Data Engineering, you will do technical design, and implement components for data engineering solution. Utilize deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to insure the necessary health of the overall solution
The role requires a hands-on technologist who has strong programming background like Java / Scala / Python, should have experience in Data Ingestion, Integration and data Wrangling, Computation, Analytics pipelines and exposure to Hadoop ecosystem components. Having hands-on knowledge on at least one of AWS, GCP, Azure cloud platforms will be preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1.Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies
2.Minimum 1.5 years of experience in Big Data technologies
3.Hands-on experience with the Hadoop stack – HDFS, sqoop, kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, hive, oozie, airflow and other components required in building end to end data pipeline. Working knowledge on real-time data pipelines is added advantage.
4.Strong experience in at least of the programming language Java, Scala, Python. Java preferable
5.Hands-on working knowledge of NoSQL and MPP data platforms like Hbase, MongoDb, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery etc
Preferred Experience and Knowledge (Good to Have):
# Competency
1.Good knowledge of traditional ETL tools (Informatica, Talend, etc) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands on experience
2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc
3.Knowledge on distributed messaging frameworks like ActiveMQ / RabbiMQ / Solace, search & indexing and Micro services architectures
4.Performance tuning and optimization of data pipelines
5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6.Working knowledge with data platform related services on at least 1 cloud platform, IAM and data security
7.Cloud data specialty and other related Big data technology certifications
Job Title: Senior Associate L1 – Data Engineering
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Dear Connections,
We are hiring! Join our dynamic team as a QA Automation Tester (Python, Java, Selenium, API, SQL, Git)! We're seeking a passionate professional to contribute to our innovative projects. If you thrive in a collaborative environment, possess expertise in Python, Java, Selenium, and Robot Framework, and are ready to make an impact, apply now! Wissen Technology is committed to fostering innovation, growth, and collaboration. Don't miss this chance to be part of something extraordinary.
Company Overview:
Wissen is the preferred technology partner for executing transformational projects and accelerating implementation through thought leadership and a solution mindset. It is a leading IT solutions and consultancy firm dedicated to providing innovative and customized solutions to global clients. We leverage cutting-edge technologies to empower businesses and drive digital transformation.
#jobopportunity #hiringnow #joinourteam #career #wissen #QA #automationtester #robot #apiautomation #sql #java #python #selenium
About Company:
Our client is the industry-leading provider of CRM messaging solutions. As a forward-thinking global company, it continues to innovate and develop cutting-edge solutions that redefine how businesses digitally communicate with their customers. It works with 2500 customers across 190 countries with customers ranging from SMBs to large global enterprises.
About the role:
The Director of Product Management is responsible for overseeing and implementing product development policies, objectives, and initiatives as well as leading research for new products, product enhancements, and product design.
Roles & responsibilities:
- Become a product expert on all company's solutions
- Build and own the product roadmap and timeline.
- Develop and execute a go-to-market strategy that addresses product, pricing, messaging, competitive positioning, product launch and promotion.
- Work with Development leaders to oversee development resources, including managing ROI, timelines, and deliverables.
- Work with the leadership team on driving product strategy, in both new and existing products, to increase overall market share, revenue and customer loyalty.
- Implement and communicate the strategic and technical direction for the department.
- Engage directly with customers to understand market needs and product requirements.
- Develop/implement a suite of Key Performance Indicators (KPI's) to measure product performance including profitability, customer satisfaction metrics, compliance, and delivery efficiency.
- Define and measure value of software solutions to establish and quantify customer ROI.
- Represent the company by visiting customers to solicit feedback on company products and services.
- Monitors and reports progress of projects within agreed upon timeframes.
- Write very high quality BRD, PRDs, Epics and User Stories
- Creates functional strategies and specific objectives as well as develops budgets, policies, and procedures.
- Creates and analyzes financial proposals related to product development and provides supporting content showing allocation of funds to execute these plans.
- Write status updates, iteration delivery and release notes as necessary
- Display a high level of critical thinking in cross-functional process analysis and problem resolution for new and existing products.
- Develop & conduct specialized training on new products launched and raise awareness & application of relevant subject matter.
- Monitor internal processes for efficiency and validity pre & post product launch/changes.
Requirements:
- Excellent communication skills, both verbal and in writing.
- Strong customer focus paired with exceptional presentation skills.
- Skilled at data analytics focused on identifying opportunities, driving insights, and measuring value.
- Strong problem-solving skills.
- Ability to work effectively in a diverse team environment.
- Proven strategic and tactical leadership, motivation, and decision-making skills
Required Education & Experience:
- Bachelor's Degree in Technology related field.
- Experience in working with a geographically diverse development team.
- Strong technical background with the ability to understand and discuss technical concepts.
- Proven experience in Software Development and Product Management.
- 12+ years of experience leading product teams in a fast-paced business environment as Product Leader on Software Platform or SaaS solution.
- Proven ability to lead and influence cross-functional teams.
- Demonstrated success in delivering high-impact products.
Preferred Qualifications
- Transition from software development role to product management.
- Experience building messaging solutions or marketing or support solutions.
- Experience with agile development methodologies.
- Familiarity with design thinking principles.
- Knowledge of relevant technologies and industry trends.
- Strong project management skills.
Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune /Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have knowledge and created forms using Django. Crispy forms is a plus point.
- Must have leadership experience
- Should have good understanding of function based and class based views.
- Should have good understanding about authentication (JWT and Token authentication)
- Django – at least one senior with deep Django experience. The other 1 or 2 can be mid to senior python or Django
- FrontEnd – Must have React/ Angular, CSS experience
- Database – Ideally SQL but most senior has solid DB experience
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server Side Rendered/jQuery is older tech but is what we are ok with for internal tools. This is a good combination of late adopter agile stack integrated within an enterprise. Potentially we can push them to React for some discreet projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven to deployment / setup to AWS, GCP.
- Load balancing, more advanced services, task queues, etc.
We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.
What will you work on?
- Interface with clients
- Recommend tech stacks
- Define end-to-end logical and cloud-native architectures
- Define APIs
- Integrate with 3rd party systems
- Create architectural solution prototypes
- Hands-on coding, team lead, code reviews, and problem-solving
What Makes You A Great Fit?
- 5+ years of software experience
- Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
- Solid expertise and hands-on experience in Python with Flask or Django
- Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
- Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
- Knowledge of DevOps practices
- Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
- Excellent communication skills, verbal and written
The job is for a full-time position at our https://goo.gl/maps/o67FWr1aedo">Pune (Viman Nagar) office.
(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)
Hiring alert 🚨
Calling all #PythonDevelopers looking for an #ExcitingJobOpportunity 🚀 with one of our #Insurtech clients.
Are you a Junior Python Developer eager to grow your skills in #BackEnd development?
Our company is looking for someone like you to join our dynamic team. If you're passionate about Python and ready to learn from seasoned developers, this role is for you!
📣 About the company
The client is a fast-growing consultancy firm, helping P&C Insurance companies on their digital journey. With offices in Mumbai and New York, they're at the forefront of insurance tech. Plus, they offer a hybrid work culture with flexible timings, typically between 9 to 5, to accommodate your work-life balance.
💡 What you’ll do
📌 Work with other developers.
📌 Implement Python code with assistance from senior developers.
📌 Write effective test cases such as unit tests to ensure it is meeting the software design requirements.
📌 Ensure Python code when executed is efficient and well written.
📌 Refactor old Python code to ensure it follows modern principles.
📌 Liaise with stakeholders to understand the requirements.
📌 Ensure integration can take place with front end systems.
📌 Identify and fix code where bugs have been identified.
🔎 What you’ll need
📌 Minimum 3 years of experience writing AWS Lambda using Python
📌 Knowledge of other AWS services like CloudWatch and API Gateway
📌 Fundamental understanding of Python and its frameworks.
📌 Ability to write simple SQL queries
📌 Familiarity with AWS Lambda deployment
📌 The ability to problem-solve.
📌 Fast learner with an ability to adapt techniques based on requirements.
📌 Knowledge of how to effectively test Python code.
📌 Great communication and collaboration skills.
Full Stack Developer Job Description
Position: Full Stack Developer
Department: Technology/Engineering
Location: Pune
Type: Full Time
Job Overview:
As a Full Stack Developer at Invvy Consultancy & IT Solutions, you will be responsible for both front-end and back-end development, playing a crucial role in designing and implementing user-centric web applications. You will collaborate with cross-functional teams including designers, product managers, and other developers to create seamless, intuitive, and high-performance digital solutions.
Responsibilities:
Front-End Development:
Develop visually appealing and user-friendly front-end interfaces using modern web technologies such as C# Coding, HTML5, CSS3, and JavaScript frameworks (e.g., React, Angular, Vue.js).
Collaborate with UX/UI designers to ensure the best user experience and responsive design across various devices and platforms.
Implement interactive features, animations, and dynamic content to enhance user engagement.
Optimize application performance for speed and scalability.
Back-End Development:
Design, develop, and maintain the back-end architecture using server-side technologies (e.g., Node.js, Python, Ruby on Rails, Java, .NET).
Create and manage databases, including data modeling, querying, and optimization.
Implement APIs and web services to facilitate seamless communication between front-end and back-end systems.
Ensure security and data protection by implementing proper authentication, authorization, and encryption measures.
Collaborate with DevOps teams to deploy and manage applications in cloud environments (e.g., AWS, Azure, Google Cloud).
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Proven experience as a Full Stack Developer or similar role.
Proficiency in front-end development technologies like HTML5, CSS3, JavaScript, and popular frameworks (React, Angular, Vue.js, etc.).
Strong experience with back-end programming languages and frameworks (Node.js, Python, Ruby on Rails, Java, .NET, etc.).
Familiarity with database systems (SQL and NoSQL) and their integration with web applications.
Knowledge of web security best practices and application performance optimization.
at DeepIntent
Who We Are:
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams
- Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
- Optimize queries and data access efficiencies, serve as expert in how to most efficiently attain desired data points
- Build “mastered” versions of the data for Analytics specific querying use cases
- Help with data ETL, table performance optimization
- Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent
- Build & operate scalable and robust data architectures
- Interpret analytics methodology requirements and apply to data architecture to create standardized queries and operations for use by analytics teams
- Implement DataOps practices
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics specific objectives
- Collaborate with various business stakeholders, software engineers, machine learning engineers, analysts
- Operate between Engineers and Analysts to unify both practices for analytics insight creation
Who You Are:
- Adept in market research methodologies and using data to deliver representative insights
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases
- Deep SQL experience is a must
- Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical audiences
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data
- Experience working with public clouds like GCP/AWS
- Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies
- Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
- Proficient with SQL, Python or a JVM-based language, and Bash
- Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious
- Comfortable to work in EST Time Zone
at DeepIntent
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a talented candidate with several years of experience in software Quality Assurance to join our QA team. This position will be at an individual contributor level as part of a collaborative, fast-paced team. As a member of the QA team, you will work closely with Product Managers and Developers to understand application features and create robust comprehensive test plans, write test cases, and work closely with the developers to make the applications more testable. We are looking for a well-rounded candidate with solid analytical skills, an enthusiasm for taking ownership of features, a strong commitment to quality, and the ability to work closely and communicate effectively with development and other teams. Experience with the following is preferred:
- Python
- Perl
- Shell Scripting
- Selenium
- Test Automation (QA)
- Software Testing (QA)
- Software Development (MUST HAVE)
- SDET (MUST HAVE)
- MySQL
- CI/CD
Who You Are:
- Hands-on experience with QA automation framework development & design (preferred language Python; a minimal example follows this list)
- Strong understanding of testing methodologies
- Scripting
- Strong problem analysis and troubleshooting skills
- Experience in databases, preferably MySQL
- Debugging skills
- REST/API testing experience is a plus
- Integrate end-to-end tests with CI/CD pipelines and monitor and improve metrics around test coverage
- Ability to work in a dynamic and agile development environment and be adaptable to changing requirements
- Performance testing experience with relevant automation and monitoring tools
- Exposure to Dockerization or Virtualization is a plus
- Experience working in the Linux/Unix environment
- Basic understanding of OS
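A minimal sketch of the kind of automated check this role involves, using pytest and Selenium (both named above); the URL and title assertion are placeholders:

```python
# test_smoke.py -- hypothetical smoke test; URL and assertion are placeholders.
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    # Headless Chrome keeps the test runnable inside a CI/CD container.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")
    assert "Example" in driver.title
```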
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
The role is with a Fintech Credit Card company based in Pune within the Decision Science team. (OneCard )
About
Credit cards haven't changed much for over half a century so our team of seasoned bankers, technologists, and designers set out to redefine the credit card for you - the consumer. The result is OneCard - a credit card reimagined for the mobile generation. OneCard is India's best metal credit card built with full-stack tech. It is backed by the principles of simplicity, transparency, and giving back control to the user.
The Engineering Challenge
“Re-imaging credit and payments from First Principles”
Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.
Purpose of Role :
- Develop and implement the collection analytics and strategy function for the credit cards. Use analysis and customer insights to develop optimum strategy.
CANDIDATE PROFILE :
- Successful candidates will have in-depth knowledge of statistical modelling/data analysis tools (Python, R, etc.) and techniques. They will be adept communicators with good interpersonal skills, able to work with senior stakeholders in India to grow revenue, primarily through identifying, delivering, and creating new, profitable analytics solutions.
We are looking for someone who:
- Has a proven track record in collection and risk analytics, preferably in the Indian BFSI industry. This is a must.
- Can identify & deliver appropriate analytics solutions
- Is experienced in analytics team management
Essential Duties and Responsibilities :
- Responsible for delivering high quality analytical and value added services
- Responsible for automating insights and proactive actions on them to mitigate collection risk.
- Work closely with the internal team members to deliver the solution
- Engage Business/Technical Consultants and delivery teams appropriately so that there is a shared understanding and agreement as to deliver proposed solution
- Use analysis and customer insights to develop value propositions for customers
- Maintain and enhance the suite of suitable analytics products.
- Actively seek to share knowledge within the team
- Share findings with peers from other teams and management where required
- Actively contribute to setting best practice processes.
Knowledge, Experience and Qualifications :
Knowledge :
- Good understanding of collection analytics preferably in Retail lending industry.
- Knowledge of statistical modelling/data analysis tools (Python, R etc.), techniques and market trends
- Knowledge of different modelling frameworks like Linear Regression, Logistic Regression, Multiple Regression, LOGIT, PROBIT, time-series modelling, CHAID, CART, etc. (a small scoring illustration follows this list)
- Knowledge of Machine learning & AI algorithms such as Gradient Boost, KNN, etc.
- Understanding of decisioning and portfolio management in banking and financial services would be added advantage
- Understanding of credit bureau would be an added advantage
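To make the modelling expectation concrete, a minimal sketch of a logistic-regression risk score in scikit-learn; the features and synthetic data are invented stand-ins for real portfolio/bureau variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical features, e.g. utilisation, DPD, tenure, balance.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```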
Experience :
- 4 to 8 years of work experience in core analytics function of a large bank / consulting firm.
- Experience working on collection analytics is a must
- Experience handling large data volumes using data analysis tools and generating good data insights
- Demonstrated ability to communicate ideas and analysis results effectively both verbally and in writing to technical and non-technical audiences
- Excellent communication, presentation, and writing skills; strong interpersonal skills
- Motivated to meet and exceed stretch targets
- Ability to make the right judgments in the face of complexity and uncertainty
- Excellent relationship and networking skills across our different business and geographies
Qualifications :
- Master's degree in Statistics, Mathematics, Economics, Business Management or Engineering from a reputed college
About UpSolve
We build and deliver complex AI solutions that help drive business decisions faster and more accurately. We are an AI company with a range of solutions developed on video, image, and text.
What you will do
- Stay informed on new technologies and implement cautiously
- Maintain necessary documentation for the project
- Fix the issues reported by application users
- Plan, build, and design solutions with a mental note of future requirements
- Coordinate with the development team to manage fixes, code changes, and merging
Location: Mumbai
Working Mode: Remote
What are we looking for
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Minimum 2 years of professional experience in software development, with a focus on machine learning and full stack development.
- Strong proficiency in Python programming language and its machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
- Experience in developing and deploying machine learning models in production environments.
- Proficiency in web development technologies including HTML, CSS, JavaScript, and front-end frameworks such as React, Angular, or Vue.js.
- Experience in designing and developing RESTful APIs and backend services using frameworks like Flask or Django.
- Knowledge of databases and SQL for data storage and retrieval.
- Familiarity with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Good to have Cloud Exposure
ROLE DESCRIPTION
The ideal candidate will be passionate about building resilient, scalable, and high-performance distributed systems products. This individual will thrive and succeed in delivering high-quality technology products in a fast-paced and rapid growth environment where priorities could shift quickly. We are looking for an engineer who prioritizes well, communicates clearly, and understands how to drive a high level of focus and excellence within a strong team. This person has an innate drive to build a culture centered on customer focus, efficient execution, high quality, rigorous testing, deep monitoring, and solid software engineering practices.
WHO WILL LOVE THIS JOB?
• Attracted to creativity, innovation, and eagerness to learn.
• Alignment to a fast-paced organization and its short-term and long-term goals.
• An engaging, open, genuine personality that naturally encourages interaction with individuals at all levels.
• Strong value system and sense of ethics.
• Absolute dedication to premium quality.
• Want to build a strong core product team capable of developing solutions for complex, industry-first problems
• Build a balance of experience, knowledge and new learnings
ROLE & RESPONSIBILITIES
• Driving the success of the software engineering team at Datamotive.
• Driving go/no-go decisions for product releases to customers.
• Driving the QE team in developing test scenarios & product automation.
• Collaborating with senior and peer engineers to identify and improve upon feature improvements.
• Build a strong customer focussed mindset for qualifying product features and use cases.
• Develop, Build & Perform functional, scale and performance testing.
• Assist in Identifying, Researching & Designing newer features and cloud platform support in areas of disaster recovery, data protection, workload migration etc.
• Conduct pilot tests to assess the functionality of newly developed programs.
• Interface directly with customers for product introduction, knowledge transfer, solutions, bug triaging, etc.
• Assist customers by giving product demos, conducting POCs, training etc.
• Manage Datamotive infrastructure, bringing innovative automation for optimizing infrastructure usage through monitoring and scripting.
• Design test environments to simulate customer behaviors and use cases in VMware vSphere, AWS, GCP, and Azure clouds.
• Help write technical documentation, and generate marketing content like blogs, webinars, seminars etc.
TECHNICAL SKILLS
• 8 - 12 years of experience in software testing with a relevant domain understanding of Data Protection, Disaster Recovery, and Ransomware Recovery.
• A strong understanding and demonstrable experience with at least one of the major public cloud platforms (GCP, AWS, Azure or VMware)
• A strong understanding and experience in qualifying complex, distributed systems at feature, scale and performance.
• Insights into the development of client-server SaaS applications with good breadth across networking, storage, micro-services, and other web technologies.
• Programming knowledge in either Python, Shell scripts or Powershell.
• Strong knowledge of test automation frameworks E.g. Selenium, Cucumber, and Robot frameworks
• Should be a computer science graduate with strong fundamentals & problem-solving abilities.
• Good understanding of virtualization, storage and cloud platforms like VMware, AWS, GCP, Azure and/or Kubernetes will be preferable.
WHAT’S IN IT FOR YOU?
• Impact. Backed by our TEAM, Investors and Advisors, Datamotive is on the path to rapid growth. As we take our products to the market, your position will be vital as you play a crucial role in innovating and developing our products, identifying new features, and filing patents, while also gaining personal experience and responsibilities. As a key player in our company's success, the impact of your work will be felt as we grow as an organization.
• Career Growth. At Datamotive, we highly value the input made by each employee to help us achieve our company goals. To this end, we strive to ensure that everyone has access, and exposure to be up-to-date in the industry, and to learn and improve their expertise. We ensure that each employee is given exposure to understanding the functional and technical elements of our products as well as all related business functions. As your knowledge grows, so do the opportunities for advancement to more senior opportunities or into other areas of our business. We strive to be a company where you can truly chart out a career path for yourself.
at Concinnity Media Technologies
- Develop, train, and optimize machine learning models using Python, ML algorithms, deep learning frameworks (e.g., TensorFlow, PyTorch), and other relevant technologies.
- Implement MLOps best practices, including model deployment, monitoring, and versioning.
- Utilize Vertex AI, MLFlow, KubeFlow, TFX, and other relevant MLOps tools and frameworks to streamline the machine learning lifecycle.
- Collaborate with cross-functional teams to design and implement CI/CD pipelines for continuous integration and deployment using tools such as GitHub Actions, TeamCity, and similar platforms.
- Conduct research and stay up-to-date with the latest advancements in machine learning, deep learning, and MLOps technologies.
- Provide guidance and support to data scientists and software engineers on best practices for machine learning development and deployment.
- Assist in developing tooling strategies by evaluating various options, vendors, and product roadmaps to enhance the efficiency and effectiveness of our AI and data science initiatives.
WHO WILL LOVE THIS JOB?
• Attracted to creativity, innovation, and eagerness to learn
• Alignment to a fast-paced organization and its short-term and long-term goals
• An engaging, open, genuine personality that naturally encourages interaction with individuals at all levels
• Strong value system and sense of ethics
• Absolute dedication to premium quality
• Want to build a strong core product team capable of developing solutions for complex, industry-first problems.
• Build a balance of experience, knowledge, and new learnings
ROLES AND RESPONSIBILITIES?
• Driving the success of the software engineering team at Datamotive.
• Collaborating with senior and peer engineers to prioritize and deliver features on the roadmap.
• Build strong development team with focus on building optimized & usable solutions.
• Research, design & develop distributed solutions to handle workload mobility across multi-cloud & hybrid clouds
• Assist in Identifying, Researching & Designing newer features and cloud platform support in areas of disaster recovery, data protection, workload migration etc.
• Assist in building product roadmap.
• Conduct pilot tests to assess the functionality of newly developed programs.
• Interface directly with customers for product introduction, knowledge transfer, solutioning, bug triaging, etc.
• Assist customers by giving product demos, conducting POCs, trainings etc.
• Manage Datamotive infrastructure, bring innovative automation for optimizing infrastructure usage through monitoring and scripting.
• Design test environments to simulate customer behaviours and use cases in VMware vSphere, AWS, GCP, Azure clouds.
• Help write technical documentation, generate marketing content like blogs, webinars, seminars etc.
TECHNICAL SKILLS
• 3 – 8 years of experience in software development with relevant domain understanding of Data Protection, Disaster Recovery, Ransomware Recovery.
• A strong understanding and demonstrable experience with at least one of the major public cloud platforms (GCP, AWS, Azure or VMware)
• A strong understanding and experience of designing and developing architecture of complex, distributed systems.
• Insights into development of client-server SaaS applications with good breadth across networking, storage, micro-services, and other web technologies.
• Experience building and leading strong development teams with a systems product development background
• Programming knowledge in any of Go, C, C++, Python or Shell script.
• Should be a computer science graduate with strong fundamentals & problem-solving abilities.
• Good understanding of virtualization, storage and cloud platforms like VMware, AWS, GCP, Azure and/or Kubernetes will be preferable
About Us
Mindtickle provides a comprehensive, data-driven solution for sales readiness and enablement that fuels revenue growth and brand value for dozens of Fortune 500 and Global 2000 companies and hundreds of the world’s most recognized companies across technology, life sciences, financial services, manufacturing, and service sectors.
With purpose-built applications, proven methodologies, and best practices designed to drive effective sales onboarding and ongoing readiness, Mindtickle enables company leaders and sellers to continually assess, diagnose and develop the knowledge, skills, and behaviors required to engage customers and drive growth effectively. We are funded by great investors like SoftBank, Canaan Partners, NEA, Accel Partners, and others.
Job Brief
We are looking for a rockstar researcher at the Center of Excellence for Machine Learning. You are responsible for thinking outside the box, crafting new algorithms, developing end-to-end artificial intelligence-based solutions, and rightly selecting the most appropriate architecture for the system(s), such that it suits the business needs, and achieves the desired results under given constraints.
Credibility:
- You must have a proven track record in research and development with adequate publication/patenting and/or academic credentials in data science.
- You have the ability to directly connect business problems to research problems along with the latest emerging technologies.
Strategic Responsibility:
- To perform the following: understanding problem statements, connecting the dots between high-level business statements and deep technology algorithms, crafting new systems and methods in the space of structured data mining, natural language processing, computer vision, speech technologies, robotics or Internet of things etc.
- To be responsible for end-to-end production level coding with data science and machine learning algorithms, unit and integration testing, deployment, optimization and fine-tuning of models on cloud, desktop, mobile or edge etc.
- To learn in a continuous mode, upgrade and upskill along with publishing novel articles in journals and conference proceedings and/or filing patents, and be involved in evangelism activities and ecosystem development etc.
- To share knowledge, mentor colleagues, partners, and customers, take sessions on artificial intelligence topics both online or in-person, participate in workshops, conferences, seminars/webinars as a speaker, instructor, demonstrator or jury member etc.
- To design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance.
- To collaborate within the product streams and team to bring best practices and leverage world-class tech stack.
- To set up all essentials (tracking/alerting) to make sure the infrastructure/software built is working as expected.
- To search, collect and clean Data for analysis and setting up efficient storage and retrieval pipelines.
Personality:
- Requires excellent communication skills – written, verbal, and presentation.
- You should be a team player.
- You should be positive towards problem-solving and have a very structured thought process to solve problems.
- You should be agile enough to learn new technology if needed.
Qualifications:
- B Tech / BS / BE / M Tech / MS / ME in CS or equivalent from Tier I / II or Top Tier Engineering Colleges and Universities.
- 6+ years of strong software (application or infrastructure) development experience and software engineering skills (Python, R, C, C++ / Java / Scala / Golang).
- Deep expertise and practical knowledge of operating systems, MySQL and NoSQL databases (Redis/Couchbase/MongoDB/ES or any graph DB).
- Good understanding of Machine Learning Algorithms, Linear Algebra and Statistics.
- Working knowledge of Amazon Web Services (AWS).
- Experience with Docker and Kubernetes will be a plus.
- Experience with Natural Language Processing, Recommendation Systems, or Search Engines.
Our Culture
As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks, great learning opportunities & growth.
Our culture reflects the globally diverse backgrounds of our employees along with our commitment to our customers, each other, and a passion for excellence.
To know more about us, feel free to go through these videos:
1. Sales Readiness Explained: https://www.youtube.com/watch?v=XyMJj9AlNww&t=6s
2. What We Do: https://www.youtube.com/watch?v=jv3Q2XgnkBY
3. Ready to Close More Deals, Faster: https://www.youtube.com/watch?v=nB0exreVU-s
To view more videos, please access the below-mentioned link:
https://www.youtube.com/c/mindtickle/videos
Mindtickle is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification document form upon hire.
- Seeking an individual with around 5+ years of experience.
- Must have skills - Jenkins, Groovy, Ansible, Shell Scripting, Python, Linux Admin
- Deep knowledge of Terraform and AWS to automate and provision EC2, EBS, and SQL Server; cost optimization; CI/CD pipelines using Jenkins. Serverless automation is a plus. (A small provisioning sketch follows this list.)
- Excellent writing and communication skills in English. Enjoy writing crisp and understandable documentation
- Comfortable programming in one or more scripting languages
- Enjoys tinkering with tooling and finding easier ways to handle systems through research. Strong awareness around build vs. buy.
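As one possible illustration of scripted provisioning in this stack, a boto3 sketch that launches a tagged EC2 instance; the region, AMI ID, instance type, and tag values are placeholders, not from the posting:

```python
import boto3

# Placeholders throughout: region, AMI ID, instance type, and tags are illustrative.
ec2 = boto3.resource("ec2", region_name="ap-south-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "ci-runner"}],
    }],
)
print("Launched:", instances[0].id)
```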
Experience: 4-8 years
Notice Period: 15-30 days
Mandatory Skill Set:
Front End: ReactJS / Javascript / CSS / jQuery / Bootstrap
Backend: Python /Django/ Flask / Tornado
Responsibilities :
- Responsible for design and architecture of functional prototypes and production ready systems
- Uses open source frameworks as appropriate. Django Preferred.
- Develops Python and JavaScript code as necessary.
- Co-ordinating with team lead / product team and contributing to business requirements in terms of code.
- Write Rest APIs and documentation to support consumption of these APIs.
- Communicate technical concepts with trade offs, risks, and benefits.
- Evaluate and resolve product related issues.
Requirements :
- Demonstrable experience writing clean, thoughtful, and business-oriented code
- Strong understanding of JavaScript, HTML, and CSS3. Knowledge of ReactJS and Redux is a plus.
- Good understanding of REST APIs and experience in building them. Knowledge of Django REST Framework is a plus.
- Experience on asynchronous request handling, partial page updates, and AJAX.
- Proficient understanding of cross browser compatibility issues and ways to work around such issues
- Proficient understanding of code versioning tools, such as Git / Mercurial / SVN
- Proactive in terms of sharing updates across the entire team.
About Us - Celebal Technologies is a premier software services company in the field of Data Science, Big Data and Enterprise Cloud. Celebal Technologies helps you discover competitive advantage by employing intelligent data solutions using cutting-edge technology that can bring massive value to your organization. The core offerings are around "Data to Intelligence", wherein we leverage data to extract intelligence and patterns, thereby facilitating smarter and quicker decision-making for clients. Understanding the core value of modern analytics for the enterprise, we help businesses improve their business intelligence and become more data-driven in architecting solutions.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• Experience with GitHub, JIRA, Confluence, and Continuous Integration (CI) systems.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow tools such as Agile, Jira, Scrum/Kanban, etc.
• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)
• Strong communication skills and ability to explain protocol and processes with team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated, demonstrating the ability to achieve results with new technologies under minimal supervision.
• Organized, flexible, and analytical ability to solve problems creatively.
Fintech Leader, building a product on data Science
Data Scientist
We are looking for an experienced Data Scientist to join our engineering team and help us enhance our mobile application with data. In this role, we're looking for people who are passionate about developing ML/AI in various domains to solve enterprise problems. We are keen on hiring someone who loves working in a fast-paced start-up environment and is looking to solve some challenging engineering problems.
As one of the earliest members in engineering, you will have the flexibility to design the models and architecture from the ground up. As with any early-stage start-up, we expect you to be comfortable wearing various hats and to be a proactive contributor in building something truly remarkable.
Responsibilities
- Research, develop, and maintain machine learning and statistical models for business requirements
- Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem
- Coordinate with different functional teams to monitor outcomes and refine/improve the machine learning models
- Implement models to uncover patterns and predictions, creating business value and innovation
- Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization
- Develop NLP concepts and algorithms to classify and summarize structured/unstructured text data (a small illustration follows this list)
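A minimal sketch of the text-classification half of that NLP work, using scikit-learn; the toy texts and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for real, labelled user/transaction text.
texts = ["refund not processed", "great app, love it", "card declined again"]
labels = ["complaint", "praise", "complaint"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["refund declined"]))  # shares tokens with the complaint examples
```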
Qualifications
- 3+ years of experience solving complex business problems using machine learning
- Fluency in a programming language such as Python is a must, along with NLP and BERT experience
- Strong analytical and critical thinking skills
- Experience in building production-quality models using state-of-the-art technologies
- Familiarity with databases like MySQL, Oracle, SQL Server, NoSQL, etc. is desirable
- Ability to collaborate on projects and work independently when required
- Previous experience in the Fintech/payments domain is a bonus
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics or another quantitative field from a top-tier institute
1. Should have worked in Agile methodology and microservices architecture
2. Should have 7+ years of experience in Python and Django framework
3. Should have a good knowledge of DRF
4. Should have knowledge of User Auth (JWT, OAuth2), API Auth, Access Control List, etc.
5. Should have working experience in session management in Django
6. Should have expertise in the Django MVC and uses of templates in frontend
7. Should have working experience in PostgreSQL
8. Should have working experience with the RabbitMQ messaging channel and Celery (a minimal task sketch follows this list)
9. Good to have javascript implementation knowledge in Django templates
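A minimal sketch of a Celery task wired to RabbitMQ, matching point 8 above; the broker URL assumes a local RabbitMQ with default credentials, and the task body is a placeholder:

```python
# tasks.py -- minimal sketch; broker URL assumes a local RabbitMQ with defaults.
from celery import Celery

app = Celery("worker", broker="amqp://guest:guest@localhost:5672//")

@app.task
def send_report(user_id: int) -> str:
    # Placeholder for real report-generation logic.
    return f"report queued for user {user_id}"

# From Django view code, enqueue with:  send_report.delay(42)
```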
About us:
Arista Networks was founded to pioneer and deliver software driven cloud networking solutions for large datacenter storage and computing environments. Arista's award-winning platforms, ranging in Ethernet speeds from 10 to 400 gigabits per second, redefine scalability, agility and resilience. Arista has shipped more than 20 million cloud networking ports worldwide with CloudVision and EOS, an advanced network operating system. Committed to open standards, Arista is a founding member of the 25/50GbE consortium. Arista Networks products are available worldwide directly and through partners.
About the job
Arista Networks is looking for world-class software engineers to join our Extensible Operating System (EOS) software development team. As a core member of the EOS team, you will be part of a fast-paced, high-caliber team building features to run the world's largest data center networks. Your software will be a key component of Arista's EOS, Arista's unique, Linux-based network operating system that runs on all of Arista's data center networking products.
The EOS team is responsible for all aspects of the development and delivery of software meant to run on the various Arista switches. You will work with your fellow engineers and members of the marketing team to gather and understand the functional and technical requirements for upcoming projects. You will help write functional specifications, design specifications, test plans, and the code to bring all of these to life. You will also work with customers to triage and fix problems in their networks. Internally, you will develop automated tests for your software, monitor the execution of those tests, and triage and fix problems found by your tests. At Arista, you will own your projects from definition to deployment, and you will be responsible for the quality of everything you deliver.
This role demands strong and broad software engineering fundamentals, and a good understanding of networking including capabilities like L2, L3, and fundamentals of commercial switching HW. Your role will not be limited to a single aspect of EOS at Arista, but will cover all aspects of EOS.
Responsibilities:
- Write functional specifications and design specifications for features related to forwarding traffic on the internet and cloud data centers.
- Independently implement solutions to small-sized problems in our EOS software, using the C, C++, and python programming languages.
- Write test plan specifications for small-sized features in EOS, and implement automated test programs to execute the cases described in the test plan.
- Debug problems found by our automated test programs and fix the problems.
- Work on a team implementing, testing, and debugging solutions to larger routing protocol problems.
- Work with Customer Support Engineers to analyze problems in customer networks and provide fixes for those problems when needed in the form of new software releases or software patches.
- Work with the System Test Engineers to analyze problems found in their tests and provide fixes for those problems.
- Mentor new and junior engineers to bring them up to speed in Arista’s software development environment.
- Review and contribute to the specifications and implementations written by other team members.
- Help to create a schedule for the implementation and debugging tasks, update that schedule weekly, and report it to the project lead.
Qualifications:
- BS Computer Science/Electrical Engineering/Computer Engineering 3-10 years experience, or MS Computer Science/Electrical Engineering/Computer Engineering + 5 years experience, Ph.D. in Computer Science/Electrical Engineering/Computer Engineering, or equivalent work experience.
- Knowledge of C, C++, and/or Python.
- Knowledge of UNIX or Linux.
- Understanding of L2/L3 networking including at least one of the following areas is desirable:
- IP routing protocols, such as RIP, OSPF, BGP, IS-IS, or PIM.
- Layer 2 features such as 802.1d bridging, the 802.1d Spanning Tree Protocol, the 802.1ax Link Aggregation Control Protocol, the 802.1AB Link Layer Discovery Protocol, or RFC 1812 IP routing.
- Ability to utilize, test, and debug packet-forwarding engines and hardware component vendors' software libraries in your solutions.
- Infrastructure functions related to distributed systems such as messaging, signalling, databases, and command line interface techniques.
- Hands on experience in the design and development of ethernet bridging or routing related software or distributed systems software is desirable.
- Hands on experience with enterprise or service provider class Ethernet switch/router system software development, or significant PhD level research in the area of network routing and packet forwarding.
- Applied understanding of software engineering principles.
- Strong problem solving and software troubleshooting skills.
- Ability to design a solution to a small-sized problem, and implement that solution without outside help. Able to work on a small team solving a medium-sized problem with limited oversight.
Resources:
- Arista's Approach to Software with Ken Duda (CTO): https://youtu.be/TU8yNh5JCyw
- Additional information and resources can be found at https://www.arista.com/en/
About the company
A strong cross-functional team of designers, software developers, and hardware experts who love creating technology products and services. We are not just an outsourcing partner, but with our deep expertise across several business verticals, we bring our best practices so that your product journey is like a breeze.
We love healthcare, medical devices, finance, and consumer electronics but we love almost everything where we can build technology products and services. In the past, we have created several niche and novel concepts and products for our customers, and we believe we still learn every day to widen our horizons!
Introduction - Advanced Technology Group
As an extension to solving the continuous medical education needs of doctors through the courses platform, Iksha Labs also developed several cutting-edge solutions for simulated training and education, including
- Virtual Reality and Augmented Reality based surgical simulations
- Hand and face-tracking-based simulations
- Remote immersive and collaborative training through Virtual Reality
- Machine learning-based auto-detection of clinical conditions from medical images
Job Description
The ideal candidate will be responsible for developing high-quality applications. They will also be responsible for designing and implementing testable and scalable code.
Key Skills/Technology
- Good command of C and C++, with algorithms and data structures
- Image Processing
- Qt (Expertise)
- Python (Expertise)
- Embedded Systems
- Good working knowledge of STL/Boost Algorithms and Data structures
Responsibilities
- Develop quality software and web applications
- Analyze and maintain existing software applications
- Develop scalable, testable code
- Discover and fix programming bugs
Qualifications
Bachelor's degree or equivalent experience in Computer Science/Electronics and Communication or a related field.
Industry Type
Medical / Healthcare
Functional Area
IT Software - Application Programming, Maintenance
Avegen is a digital healthcare company empowering individuals to take control of their health and supporting healthcare professionals in delivering life-changing care. Avegen’s core product, HealthMachine®, is a cloud-hosted, next-generation digital healthcare engine for pioneers in digital healthcare, including healthcare providers and pharmaceutical companies, to deploy high-quality robust digital care solutions efficiently and effectively. We are ISO27001, ISO13485, and Cyber essentials certified; compliant with NHS Data protection toolkit and GDPR.
Job Summary:
We are looking for a Mobile Automation Tester who is passionate about mobile app automation and has worked with one or more mobile automation frameworks.
Roles and Responsibilities :
- Write, design, and execute automated tests by creating scripts that run testing functions automatically.
- Build test automation frameworks.
- Work in an agile development environment where developers and testers work closely together to ensure requirements are met.
- Design, document, manage and execute test cases, sets, and suites.
- Work in cross-functional project teams that include Development, Marketing, Usability, Software Quality Assurance, Customer Learning, and Support.
- Review test cases and automate whenever possible.
- Educate team members on test automation and drive adoption.
- Integrate automated test cases into nightly build systems.
Required Skills:
- Previous experience working as a QA automation engineer.
- Experience in mobile testing: iOS and Android automation.
- Hands-on experience in a programming language like Java, Python, JavaScript, Ruby, or C#.
- Experience & knowledge of tools like JIRA, Selenium, and Postman, and of web and app test automation.
- Ability to deliver results under pressure.
- Self-development skills to keep up to date with fast-changing trends.
Good to Have Skills:
- Experience working with CI/CD pipelines like (Jenkins, Circle CI).
- API, DB Automation.
- Excellent scripting experience.
Educational Qualifications:
● Candidates with a Bachelor's or Master's degree would be preferred
Job Responsibilities:
Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment, creating high- and low-level designs for concurrent, high-throughput, low-latency software architecture.
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, CPU and resource management
- Analyze stack traces, memory profiles and production incident reports from traders and support teams
- Propose fixes, and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
Job Role - HTML/CSS Developer with Jinja Support
Location - Pune
Experience - 1+ Years
Job Description:
We are seeking a highly skilled HTML/CSS developer to join our team at a SaaS startup. The ideal candidate will have experience with HTML, CSS, and Jinja, as well as a strong understanding of web development best practices.
Responsibilities:
- Develop and maintain web pages and web applications using HTML, CSS, and Jinja (a rendering sketch follows this list)
- Collaborate with the development team to design and implement new features
- Write clean, maintainable, and efficient code
- Stay up-to-date with the latest web development trends and technologies
- Troubleshoot and debug any issues that arise
- Test and optimize web pages for maximum speed and scalability
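For a concrete picture of the Jinja work involved, a minimal sketch using the standalone jinja2 package; the template and `plans` data are invented for illustration:

```python
from jinja2 import Environment

env = Environment(autoescape=True)
template = env.from_string("""
<ul>
{% for plan in plans %}
  <li class="plan{% if plan.popular %} plan--popular{% endif %}">{{ plan.name }}</li>
{% endfor %}
</ul>
""")

print(template.render(plans=[
    {"name": "Starter", "popular": False},
    {"name": "Pro", "popular": True},
]))
```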
Qualifications:
- Strong experience with HTML, CSS, and Jinja
- Experience with web development frameworks such as Flask or Django
- Strong understanding of web development best practices
- Familiarity with version control systems such as Git
- Strong problem-solving and debugging skills
- Excellent communication and teamwork abilities
This is a full-time position with a competitive salary and benefits, and the opportunity to work with a talented and passionate team in a rapidly growing startup. If you are passionate about web development and want to make a real impact in a dynamic and innovative company, we would love to hear from you.
About CrelioHealth:
CrelioHealth (formerly LiveHealth) is an IT product company in the healthcare domain. We are an almost decade-old IT product organisation.
We are a flourishing, open & flexi-culture organisation with a young team.
We are a group of young enthusiasts passionate about building the best line of products in healthcare diagnostics. Our product is LIMS & CRM used for Pathology Labs & Hospitals.
Our Product -
- CrelioHealth LIMS - Web-based LIMS (Laboratory Information Management System) and RIS (Radiology Information System) solution for automating your processes & managing the business better
- CrelioHealth CRM- Patient booking and engagement tool to take patient experience to the next level.
- CrelioHealth Inventory - Online platform to manage your lab inventory, stock, and purchases
Org link - https://creliohealth.in/
We were ranked #14 in G2's list of Best Software Sellers for 2021. CrelioHealth (formerly LiveHealth) is a cloud-based LIS and RIS solution that enables laboratory staff, doctors, and patients to easily access and manage medical information on the same platform.
Find out more at https://creliohealth.com/ or get updates on our blog: https://blog.creliohealth.in
Enterprise Minds, with a core focus on engineering products, automation and intelligence, partners with customers on the trajectory towards increasing outcomes, relevance, and growth.
Harnessing the power of data and the forces that define AI, Machine Learning and Data Science, we believe in institutionalizing go-to-market models, not just exploring possibilities.
We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership, and collaboration, our people work in a spirit of co-creation, co-innovation and co-development to engineer next-generation software products with the help of accelerators.
Through Communities we connect and attract talent that shares skills and expertise. Through Innovation Labs and global design studios we deliver creative solutions.
We create vertically isolated pods which have a narrow but deep focus. We also create horizontal pods to collaborate and deliver sustainable outcomes.
We follow Agile methodologies to fail fast and deliver scalable and modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.
Pre-requisites for the Role
- Job ID: EMSP0120PS
- Primary skill: Splunk development and administration
- Secondary skills: Python, Splunk DB Connect, Visual Studio (C#), Bitbucket, Kafka, DevOps tools
- Years of experience: 5-8 years
- Location: Pune (hybrid model)
- Positions: 1
- Budget: 5-6 years (max up to 17 LPA); 6-8 years (max up to 22 LPA)
- Notice period: Immediate
Primary Role & Responsibility:
As a software engineer, your daily work involves technically challenging applications and projects where your code makes a direct contribution to the further development and upkeep of our software suite and to its application in projects.
You should be able to create Splunk dashboards and apps, and should have a good understanding of source interfaces for Splunk.
You should have an understanding of onboarding data from different sources such as JSON, XML, syslog, and error-log files (a small ingestion example follows this section).
As a software engineer, we expect much more from you than just the ability to design and develop good software. We find it important that you possess an inherent drive to get the best out of yourself every day, that you are inquisitive and that you are not intimidated by situations which require you to branch off from the beaten track. You work together with colleagues in a SCRUM team. In addition, you have regular contact with other software teams, software architects, testers and end users. Good communication skills are therefore extremely important, as well as the ability to think pro-actively and suggest possible improvements. This gives you every opportunity to contribute your personal input and grow and develop within the department.
The often complex functionality of the software includes business logic, controls for logistical transport, communication with external computer systems, reporting, data analysis and simulation. This functionality is spread across various components. You design, program and test the software based on a design concept and a set of requirements. In some cases, you will have to personally formulate these requirements together with the (end) users and/or internal stakeholders.
Desired Profile & Experience:
- Knowledge of Kafka and experience with Java
- Splunk architecture, on-premises and cloud-based deployment.
- IoT edge.
- Analytical skills and capabilities to understand how raw (unstructured) data needs to be transformed into processed information.
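As one hedged illustration of data onboarding, a sketch that pushes a JSON event to Splunk's HTTP Event Collector from Python. The endpoint path and `Authorization: Splunk <token>` header follow Splunk's HEC conventions; the host, token, and event fields are placeholders:

```python
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

# Hypothetical error event; sourcetype _json lets Splunk parse the payload.
event = {"event": {"level": "ERROR", "msg": "disk full"}, "sourcetype": "_json"}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
    timeout=10,
)
resp.raise_for_status()
```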
Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are "Growth-as-a-Service". Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics - all of which impacts a brand's bottom line, driving profitable growth.
Roles & Responsibilities:
- Work on implementation of real-time and batch data pipelines for disparate data sources.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
- Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
- Identify improvement areas in the current data system and implement optimizations.
- Work on specific areas of data governance including metadata management and data quality management.
- Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
- Develop Proof-of-Concepts to validate new technology solutions or advancements.
- Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
- Work on building intelligent systems using various AI/ML algorithms.
Desired Experience/Skill:
- Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
- Experience with private and public cloud architectures with pros/cons.
- Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must (a toy transform follows this list); knowledge of one of the frameworks such as Django or Flask is a plus.
- Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
- Knowledge of Kafka, Redis is preferred
- Experience on design and implementation of real-time and batch pipelines. Knowledge of Airflow is preferred.
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
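A toy Pandas transform of the kind these pipelines are built from; the `orders` frame and its columns are invented for illustration:

```python
import pandas as pd

# Hypothetical raw events; real pipelines would read these from S3/Kinesis/etc.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "channel": ["marketplace", "brand.com", "marketplace", "social"],
    "gmv": [120.0, 80.0, 200.0, 50.0],
})

# Aggregate raw events into a reporting-ready layer for dashboards.
by_channel = (
    orders.groupby("channel", as_index=False)
          .agg(orders=("order_id", "count"), gmv=("gmv", "sum"))
          .sort_values("gmv", ascending=False)
)
print(by_channel)
```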
• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients' Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT and data models
Skills - 50% of below:
• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities like data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe etc. (a connector sketch follows this list)
• Expertise in data modeling, with a good understanding of modeling approaches like Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java or PHP
• Experience in ETL/ELT, either via a code-first approach or using low-code tools like AWS Glue, Appflow, Informatica, Talend, Matillion, Fivetran etc.
• Experience in one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools like Tableau, PowerBI, Domo or any similar tool
• Experience with Data Virtualization tools like Trino, Starburst, Denodo, Data Virtuality, Dremio etc.
• Certified SnowPro Advanced: Data Engineer is a must.
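To ground the Snowflake items above, a hedged sketch using the official snowflake-connector-python package; the account, credentials, stage, and table names are all placeholders, and the COPY INTO options would depend on the actual file format:

```python
import snowflake.connector

# All identifiers below are placeholders for illustration.
conn = snowflake.connector.connect(
    account="xy12345",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

with conn.cursor() as cur:
    # COPY INTO is Snowflake's standard bulk-ingestion path from a stage.
    cur.execute(
        "COPY INTO raw.orders FROM @landing/orders/ "
        "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
    )
    print(cur.fetchall())

conn.close()
```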
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining & strengthening the client base.
- Work with team to define business requirements, come up with analytical solution and deliver the solution with specific focus on Big Picture to drive robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as expert in Data Science, build framework to develop Production level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relation, at the same time adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages SQL, Python/ R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in statistical modelling & algorithms, e.g. hypothesis testing, sample size estimation, A/B testing (a worked example follows this list)
- Knowledge in Mathematical programming – Linear Programming, Mixed Integer Programming etc , Stochastic Modelling – Markov chains, Monte Carlo, Stochastic Simulation, Queuing Models.
- Experience with optimization solvers (Gurobi, CPLEX) and algebraic modeling languages (PuLP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity to deploy and monitor ML models in production, delivering data products to end-users.
- Experience with ML CI/CD pipelines.
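A worked A/B-testing example of the statistical skills listed above: a two-proportion z-test with invented conversion counts, using scipy only for the normal CDF:

```python
import numpy as np
from scipy.stats import norm

# Invented numbers: conversions / visitors for control (A) and variant (B).
x_a, n_a = 200, 4000
x_b, n_b = 248, 4000

p_a, p_b = x_a / n_a, x_b / n_b
p_pool = (x_a + x_b) / (n_a + n_b)

# Pooled standard error under H0: the two conversion rates are equal.
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

print(f"z = {z:.2f}, p = {p_value:.4f}")
```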
Front End developers
- Angular.JS experience
- MongoDB query and aggregation experience (not a database administrator)
- GraphQL experience
- Node.JS and Typescript experience
- CSS and SCSS experience
- CI/CD experience with GitHub actions
Backend Developers:
- Software development experience, in one of Python (preferred) or Node.JS/TypeScript
- Experience with messaging architectures - RabbitMQ (preferred) or Kafka (a minimal publisher sketch follows this list)
- Experience with docker-containers
- Experience with Apache NiFi (valued but not necessary)
- Experience with designing or implementing horizontally scalable solutions
- Experience working with RESTful APIs
- CI/CD experience with GitHub actions
- Experience with Azure cloud
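Finally, a minimal sketch of the messaging experience the backend role asks for: publishing a JSON event to RabbitMQ with pika. The host, queue name, and payload are placeholders:

```python
import json
import pika

# Assumes a local RabbitMQ with default guest credentials (placeholder setup).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="events",
    body=json.dumps({"type": "user_signup", "id": 42}),
    properties=pika.BasicProperties(delivery_mode=2),  # make the message persistent
)

connection.close()
```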