
Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python (a minimal consumer sketch follows the qualifications list below).
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.
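For illustration only, here is a minimal sketch of consuming a Kafka topic from Python, assuming the confluent-kafka client; the broker address, consumer group, and topic name are placeholders, not part of the role.

```python
# Minimal sketch, assuming the confluent-kafka package is installed.
# Broker, group id, and topic are illustrative placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "claims-etl",               # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["healthcare-events"])   # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            # A production pipeline would add retries / dead-letter handling here.
            print(f"Consumer error: {msg.error()}")
            continue
        record = msg.value().decode("utf-8")
        # Downstream validation/enrichment (e.g., a Databricks job) would consume this.
        print(record)
finally:
    consumer.close()
```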

- 3-6 years of experience in functional testing with a good foundation in technical expertise.
- Experience in the Capital Markets domain is a must.
- Exposure to API testing tools like SoapUI and Postman.
- Well versed in SQL.
- Hands-on experience debugging issues using Unix commands.
- Basic understanding of XML and JSON structures.
- Knowledge of Finesse is good to have.

A stealth startup, born from research at Stanford's Human-Centred AI Lab, is building an AI Chief of Staff, designed to revolutionize how we work.
Most companies building AI agents are automating routine work. But we want to hold AI to a higher bar – an AI that “reasons” and helps humans deal with cognitive overload.
Come work at the intersection of cutting-edge NLP, generative AI and graph learning ML techniques. Help build technology that empowers, not exhausts!
Team: Stanford founder with a background in data science, investing, and startups; Staff engineer at Amazon; and top-tier AI practitioners as Advisors (ex-Head of AI at Spotify, Senior Eng Director at Pinterest, AI 2030 Global Leader, among others). Harvard and Stanford engineers contributed to early product development.
Role: Founding ML Engineer, full-time
Location: India. We can sponsor visas to relocate to the US based on fit.
Compensation: Around INR 25 lakh based on experience + equity
Required Qualifications
- Bachelor’s degree or master’s (preferred) in computer science, engineering, mathematics, or a related technical field from a top-tier institution
- 3+ years of experience building and deploying ML models in production, ideally with “0 to 1” ML work
Technical
- Expertise in NLP: Hugging Face Transformers, BERT, RoBERTa, and techniques like text classification and semantic clustering (a brief classification sketch follows this list).
- Experience with LLMs (e.g., GPT-4, Gemini) for prompt engineering, fine-tuning for specific use cases, content generation, and cost-optimized ML infrastructure
- Deep Learning & Graph-Based ML: Proficiency in TensorFlow, PyTorch, XGBoost, GNNs, and dynamic graph databases like Neo4j for modelling relationships
- Experience building AI-powered chatbots that integrate LLMs for natural language interaction, feedback processing, and real-time user support
- Real-Time Data Processing & Integration: Skilled in using Apache Kafka and integrating ML models into scalable, reliable back-end systems
- Cloud Deployment: Experience deploying and managing ML services on AWS, GCP, or Azure with a focus on scalability and performance optimization
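For illustration only, a minimal text-classification sketch with Hugging Face Transformers; the model checkpoint and input text are arbitrary examples, not part of the role.

```python
# Minimal sketch, assuming the transformers package (with a PyTorch or TensorFlow backend) is installed.
from transformers import pipeline

# Example public checkpoint; any compatible classification model could be substituted.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("This assistant actually reduced my meeting prep time.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```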
Primary Customer Facing Responsibilities:
- Handle customer service and support tickets efficiently, acting as the first point of contact over phone or email.
- Proactively communicate with customers regarding post-purchase support, including order installation, refunds, and payment issues.
- Utilize our client ticketing system to ensure prompt and effective resolution of customer queries.
Key Responsibilities:
Case Analysis and Critical Thinking:
- Develop comprehensive knowledge of client products and the ticketing system.
- Conduct thorough investigations to fully understand user issues, employing effective probing techniques.
Problem Solving:
- Provide accurate information and solutions for client software products or services.
- Offer alternative solutions when necessary, guiding users through the resolution process.
Post-Resolution Follow-Up:
- Ensure customer satisfaction by following up and updating customer status before case closure.
Client and Operational Responsibilities:
- Coordinate with team leads and managers for guidance on escalated cases.
- Record detailed events and problem resolutions in system logs.
- Forward customer feedback and suggestions to the appropriate internal team.
- Suggest improvements to processes and knowledge resources.
- Participate actively in team meetings and maintain effective communication with internal teams.
- Report to client leads and managers as required.
Requirements:
- Experience in help desk, software product support, and customer service.
- Tech-savvy with knowledge of computer operating systems, software, and hardware.
- Excellent written and verbal communication skills in English.
- Degree in a relevant field preferred.
- Proficient with Microsoft Office, Google Sheets, and other business software.
- Own a desktop/laptop with a stable internet connection.
- Demonstrated proactive, learning-oriented approach, with a focus on continuous process improvement.
Additional Information:
- This is a fully remote position with a 5-day work week.
- Requires approximately 9 hours of work per weekday.
- Compensation is competitive and billed hourly.
- Opportunity for long-term growth and additional responsibilities within the organization.
Our Commitment:
We value diversity and are committed to creating an inclusive environment for all employees. We encourage candidates of all backgrounds to apply.
Benefits:
- Work-from-home flexibility.
- Career advancement opportunities and professional development support.
- Supportive and collaborative team environment.
Application and Selection Process:
Initial Application:
- Interested candidates should submit their resume along with a cover letter detailing their relevant experience and why they are a good fit for this role via the Cutshort Application Portal.
Written Assessment:
- Selected candidates will be invited to complete a written assessment. This assessment is designed to evaluate technical and customer service skills and must be completed within 2 hours of commencement to ensure authenticity. Instructions and a deadline for the assessment will be provided upon selection.
Virtual Interview:
- Candidates who successfully pass the written assessment will be invited to a virtual interview with key stakeholders. This round will focus on assessing cultural fit, communication skills, and problem-solving abilities.
Onboarding and Training:
- Candidates who clear the interview stage will proceed to onboarding and training, marking the final stage of the selection process. This phase will familiarize them with company policies, tools, and the specific responsibilities of their role.
About Inevolution
Founded in 2009, InEvolution is a dynamic and evolving firm specializing in back-office, operations, and customer support services. We are a globally oriented team committed to delivering quality services with cost efficiency. Our ethos is to constantly adapt to the changing needs of the industry, effectively serving as an extension of our clients' teams.
We specialize in alleviating operational burdens from organizational leaders, allowing them to focus on growth. Our expertise spans across 24/7 client support, advanced technology utilization, proficient data management, and providing certified solutions. Our services include data hygiene, GDPR compliance, e-commerce campaign configuration, order management, reporting, sales support, and comprehensive customer support across various platforms. With our diverse experience in multiple business sectors, we offer robust solutions that cater to the unique needs of our clients.
Company Web : https://inevolution.in/

- Understand long-term and short-term business requirements and precisely match them with the capabilities of different distributed storage and computing technologies from the plethora of options available in the ecosystem.
- Create complex data processing pipelines.
- Design scalable implementations of the models developed by our Data Scientists.
- Deploy data pipelines in production systems based on CI/CD practices.
- Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
- Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers.
- Good communication skills
- Design dashboard modules
- Anaplan SDLC
Job responsibilities
- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
- You will collaborate with Data Scientists in order to design scalable implementations of their models
- You will pair to write clean and iterative code based on TDD
- Leverage various continuous delivery practices to deploy, support and operate data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
- Ensure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting (a minimal Spark sketch follows this section)
- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- You're genuinely excited about data infrastructure and operations, and you are familiar with working in cloud environments
Professional skills
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
- An interest in coaching, sharing your experience and knowledge with teammates
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
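For illustration only, a minimal batch-transformation sketch with Spark (PySpark); the paths and column names are placeholders rather than anything specific to the role.

```python
# Minimal sketch, assuming pyspark is installed; paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Read raw records from a placeholder location (could equally be HDFS, ADLS, etc.).
orders = spark.read.parquet("s3a://example-bucket/orders/")

# Aggregate order amounts per day.
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the result back out; the mode and format are illustrative choices.
daily_totals.write.mode("overwrite").parquet("s3a://example-bucket/daily_totals/")
spark.stop()
```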
Mandatory Skills
- Hands-on experience with the Java programming language (mandatory).
- 3+ years of experience in leading a team, with professional automation testing experience using Java and Selenium.
- 5+ years of experience in automation testing and test framework development using Java and Selenium. (POM, Cucumber BDD, Hybrid etc.)
- Automation Tool: Selenium (Mandatory)
- Execution of scripts in batch mode and on Selenium Grid.
- Set up test automation pipelines in CI/CD tools like Jenkins, AWS, Azure DevOps, etc.
- Flexibility to learn new programming languages, tools & frameworks.
Other Pre-Requisites
- Should be willing to work as an individual contributor on a need basis.
- Should contribute to defining testing best practices for the teams & projects. Responsible for defining and implementing the test strategy and test automation solutions around functional and regression testing.
- Defining, monitoring, and evaluating individual goals/KRA’s for the team members.
- Responsible for project initiation, project planning, effort estimations, hiring and team building for new projects.
- Experience with Test Management tools like Atlassian JIRA, TestRail, ALM or similar tools.
- Passion for achieving excellence in technical, process, product quality and reliability.
- Strong troubleshooting and root cause analysis abilities.
- Must be extremely detail- and technology-oriented and possess excellent communication skills.
- Proactive, driven individual with a strong work ethic.
Secondary Skills:
- BDD Cucumber will be an added advantage.
- Parallel execution and integration with cloud platforms like BrowserStack, Perfecto, and Sauce Labs.
- API testing: ReadyAPI, SoapUI, Postman, Rest-Assured, or similar (a minimal sketch follows this list).
- SQL queries: good hands-on experience with SQL queries and experience testing databases and batch jobs.
- Mobile testing experience is an added advantage
- Experience with load/performance testing.
- Experience using issue and project tracking software
- Experience working in an agile structure
- Experience developing POCs and participating in pre-sales activities.
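For illustration only, a minimal API check written in Python with the requests library (one possible alternative to ReadyAPI, Postman, or Rest-Assured); the URL and response fields are placeholders.

```python
# Minimal sketch, assuming the requests package and a pytest-style runner; the endpoint is a placeholder.
import requests

def test_get_user():
    response = requests.get("https://api.example.com/users/42", timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42  # placeholder field check
```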
* You are highly empathetic and passionate about helping others. You’re excited about building a strong design foundation within our team, but also eager to roll up your sleeves and do what it takes to ship products and features our users love.
* You’ve been part of a high-functioning product and design team and have a knack for user research and UX design, but can be effective across a number of other product design functions. You’re ready to take ownership of our design process and the end-to-end designs of the most critical areas of our product.
What you'll do:
* Take high level product ideas from conception through execution, across web and mobile. We are a small team so you'll work across a wide range of areas - product strategy, user research, and UX/UI design in particular.
* Take joint ownership of metrics and strategy with the product team, and work collaboratively with engineering and operations to achieve goals
You should have:
* 4+ years of experience as a product designer across web and mobile
* Mastery of the basic principles of visual design and typography
* Mastery of design tools like Figma, Sketch, or XD
* Experience with planning and conducting user research
* A passion for hard problems and helping people
* Empathy with customers, and a strong desire to understand them
* Strong desire to work on a small team where you'll be involved in almost every facet of our product planning + execution process

