With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.
We are seeking a skilled and experienced Site Reliability Engineer (SRE) to join our dynamic team. The ideal candidate will have a minimum of 3 years of hands-on experience in managing and maintaining production systems, with a focus on reliability, scalability, and performance. As an SRE at DeepIntent, you will play a crucial role in ensuring the stability and efficiency of our infrastructure, as well as contributing to the development of automation and monitoring tools.
Responsibilities:
- Deploy, configure, and maintain Kubernetes clusters for our microservices architecture.
- Utilize Git and Helm for version control and deployment management.
- Implement and manage monitoring solutions using Prometheus and Grafana.
- Work on continuous integration and continuous deployment (CI/CD) pipelines.
- Containerize applications using Docker and manage orchestration.
- Manage and optimize AWS services, including but not limited to EC2, S3, RDS, and CloudFront (AWS's CDN).
- Maintain and optimize MySQL databases, Airflow, and Redis instances.
- Write automation scripts in Bash or Python for system administration tasks (a minimal sketch follows this list).
- Perform Linux administration tasks and troubleshoot system issues.
- Utilize Ansible and Terraform for configuration management and infrastructure as code.
- Demonstrate knowledge of networking and load-balancing principles.
- Collaborate with development teams to ensure applications meet reliability and performance standards.
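To illustrate the monitoring and scripting items above, here is a minimal Python sketch (illustrative only, not DeepIntent's actual tooling) that uses the standard Prometheus HTTP API to list scrape targets that are currently down; the server URL and exit-code convention are assumptions for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch: list Prometheus scrape targets that are currently down.

Uses the standard Prometheus HTTP API (/api/v1/query); the server URL and the
exit-code convention are illustrative assumptions, not part of this posting.
"""
import sys

import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed address; adjust per environment


def down_targets(prom_url: str = PROM_URL) -> list:
    # The instant query 'up == 0' returns one series per target that failed its last scrape.
    resp = requests.get(f"{prom_url}/api/v1/query", params={"query": "up == 0"}, timeout=10)
    resp.raise_for_status()
    return [r["metric"].get("instance", "unknown") for r in resp.json()["data"]["result"]]


if __name__ == "__main__":
    down = down_targets()
    if down:
        print("Down targets:", ", ".join(down))
        sys.exit(1)  # non-zero exit so a cron job or CI step can alert on failure
    print("All targets up")
```

A check like this can run as a Kubernetes CronJob or a CI step alongside Prometheus alerting rules and Grafana dashboards.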
Additional Skills (Good to Know):
- Familiarity with ClickHouse and Druid for data storage and analytics.
- Experience with Jenkins for continuous integration.
- Basic understanding of Google Cloud Platform (GCP) and data center operations.
Qualifications:
- Minimum 3 years of experience in a Site Reliability Engineer role or similar.
- Proven experience with Kubernetes, Git, Helm, Prometheus, Grafana, CI/CD, Docker, and microservices architecture.
- Strong knowledge of AWS services, MySQL, Airflow, Redis, and CloudFront (AWS's CDN).
- Proficient in scripting languages such as Bash or Python.
- Hands-on experience with Linux administration.
- Familiarity with Ansible and Terraform for infrastructure management.
- Understanding of networking principles and load balancing.
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field.
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
About DeepIntent
DeepIntent is the leading independent healthcare marketing technology company built purposefully to influence patient health and business outcomes. The DeepIntent Healthcare Marketing Platform is the first and only platform that uniquely combines real-world health data, premium media partnerships, and custom integrations to reach patients and providers across any device. This enables healthcare marketers to plan, activate, optimize and measure campaigns that drive measurable patient and business outcomes, all within a single platform. DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
Similar jobs
About Indee
Indee is among the leading providers of a proprietary platform for secure video distribution and streaming, used by some of the world’s largest media companies, including Netflix, Paramount Pictures, and Disney, as well as over 1,100 other companies, big and small. Indee has grown 5x in the last 3 years and is scaling up at a rapid rate.
About the role
We are seeking a highly skilled and experienced Automation Engineer to join our dynamic team. As an Automation Engineer, you will play a key role in designing, implementing, and maintaining our automation testing framework. The primary focus of this role will be on utilizing Selenium, Pytest, Allure reporting, Python Requests, and Boto3 for automation testing and infrastructure management.
Responsibilities:
- Develop and maintain automated test scripts using Selenium WebDriver and Pytest to ensure the quality of web applications (a minimal sketch follows this list).
- Implement and enhance the automation testing framework to support scalability, reliability, and efficiency.
- Generate comprehensive test reports using Allure reporting for test result visualization and analysis.
- Conduct API testing using Python Requests, ensuring the functionality and reliability of backend services.
- Utilize Boto3 for automation of AWS infrastructure provisioning, configuration, and management.
- Collaborate with cross-functional teams, including developers, QA engineers, and DevOps engineers, to understand project requirements and deliver high-quality solutions.
- Identify opportunities for process improvement and optimization within the automation testing process.
- Provide technical expertise and guidance to junior team members, fostering a culture of continuous learning and development.
- Stay updated on industry trends and emerging technologies, incorporating them into our automation testing practices as appropriate.
- Participate in code reviews, ensuring adherence to coding standards and best practices.
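As a rough illustration of the stack named above (Selenium WebDriver, Pytest, Allure), here is a minimal sketch of an automated UI check; the URL and element locator are hypothetical placeholders, not Indee's actual application.

```python
"""Minimal sketch of a UI test with Selenium WebDriver + Pytest + Allure.

The URL and locator are hypothetical placeholders; browser setup assumes a
local Chrome install (Selenium 4's Selenium Manager resolves the driver).
"""
import allure
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()


@allure.title("Login page renders the sign-in form")
def test_login_form_visible(driver):
    driver.get("https://example.com/login")  # placeholder URL
    form = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "login-form"))  # placeholder locator
    )
    assert form.is_displayed()
```

Running it with `pytest --alluredir=./allure-results` produces result files that the Allure CLI can render into a report.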
Requirements:
- Strong programming skills in Python, with proficiency in writing clean, maintainable code.
- Experience with cloud infrastructure management and automation using AWS services and Boto3.
- Solid understanding of software testing principles, methodologies, and best practices.
- Excellent problem-solving skills and attention to detail.
- Ability to work effectively both independently and collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills, with the ability to interact with stakeholders at all levels.
- Passion for technology and a desire to continuously learn and improve.
- Prior experience in Agile development methodologies.
- Experience with performance testing using Locust is considered a plus.
Qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or related field; Master’s degree preferred.
- Experience: 3-5 years of proven experience in automation testing using Selenium WebDriver, Pytest, Appium, Allure reporting, Python Requests, and Boto3.
Benefits:
- Competitive salary and comprehensive benefits package.
- Opportunity to work with cutting-edge technologies and industry-leading experts.
- Flexible work environment with the option for remote work (hybrid).
- Professional development opportunities and support for continued learning.
- Dynamic and collaborative company culture with opportunities for growth and advancement.
If you are a highly motivated and skilled Automation Engineer looking to take the next step in your career, we encourage you to apply for this exciting opportunity to join our team at Indee. Help us drive innovation and shape the future of technology!
Job Title: Test Engineer
Job Description:
We are seeking a Test Engineer with 3 to 6 years of experience to join our dynamic testing team. The ideal candidate will have a strong background in manual and automation testing, excellent communication skills, the ability to write comprehensive test cases, and a deep understanding of the domain in which our products operate. The Test Engineer should be a proactive team player, capable of taking ownership of their work, maintaining good rapport within the team and across other teams, and providing clear and accurate documentation of testing activities. Hands-on experience in automation testing and API testing is required.
Qualifications and Skills:
- 3 to 6 years of experience in manual testing and automation testing.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
- Strong test case writing/execution and automation scripting skills.
- Deep domain knowledge relevant to the software under test.
- Proactive and self-motivated with a strong sense of ownership.
- Effective documentation and reporting skills.
- Experience with web automation is mandatory; mobile automation is an added advantage.
- Experience with API testing.
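For the API-testing requirement above, here is a minimal sketch using pytest with the requests library (one common Python choice; the posting does not mandate a specific stack). The endpoints, IDs, and payload are placeholders.

```python
"""Minimal API-test sketch with pytest + requests; endpoints and data are placeholders."""
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test


@pytest.mark.parametrize("item_id, expected_status", [(1, 200), (999999, 404)])
def test_get_item_status_codes(item_id, expected_status):
    # Verify both the happy path and the not-found path return the documented codes.
    resp = requests.get(f"{BASE_URL}/items/{item_id}", timeout=5)
    assert resp.status_code == expected_status


def test_create_item_roundtrip():
    # Create a record, then confirm the response echoes the submitted fields.
    payload = {"name": "sample"}  # placeholder body
    resp = requests.post(f"{BASE_URL}/items", json=payload, timeout=5)
    assert resp.status_code == 201
    assert resp.json()["name"] == payload["name"]
```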
Job Overview:
As a Lead ETL Developer for a very large client of Paradigm, you will be in charge of the design and creation of data warehouse functions such as extraction, transformation, and loading of data, and you are expected to have specialized working knowledge of cloud platforms, especially Snowflake. In this role, you’ll be part of Paradigm’s Digital Solutions group, where we are looking for someone with the technical expertise to build and maintain sustainable ETL solutions around data modeling and data profiling to support identified needs and expectations from the client.
Delivery Responsibilities
- Lead the technical planning, architecture, estimation, development, and testing of ETL solutions
- Knowledge and experience in most of the following architectural styles: Layered Architectures, Transactional applications, PaaS-based architectures, and SaaS-based applications; experience designing and developing ETL-based Cloud PaaS and SaaS solutions.
- Create data models that are aligned with the client’s requirements.
- Design, develop, and support ETL mappings; apply strong SQL skills and experience in developing ETL specifications
- Create ELT pipelines, data model updates, and orchestration using dbt, Snowflake Streams and Tasks, and Astronomer, including testing (a minimal load sketch follows this list)
- Focus on ETL aspects including performance, scalability, reliability, monitoring, and other operational concerns of data warehouse solutions
- Design reusable assets, components, standards, frameworks, and processes to support and facilitate end to end ETL solutions
- Experience gathering requirements and defining the strategy for third-party data ingestion from sources such as SAP HANA and Oracle
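As a sketch of the ELT loading work described above, the snippet below uses snowflake-connector-python to copy staged files from an external stage into a raw table; the account, credentials, stage, and table names are placeholders, not the client's environment.

```python
"""Minimal ELT load sketch with snowflake-connector-python; all identifiers are placeholders."""
import snowflake.connector

# Connection parameters are illustrative; in practice they come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="ETL_SERVICE",
    password="***",
    warehouse="LOAD_WH",
    database="RAW",
    schema="SALES",
)

try:
    with conn.cursor() as cur:
        # COPY INTO loads any new files from the named external stage into the raw table;
        # Snowflake tracks already-loaded files, so reruns are effectively idempotent.
        cur.execute(
            """
            COPY INTO RAW.SALES.ORDERS
            FROM @RAW.SALES.S3_ORDERS_STAGE
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            """
        )
        print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```

In practice a step like this would usually be wrapped in a dbt model or an Airflow/Astronomer task rather than run as a standalone script.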
Required Qualifications
- Expert hands-on experience in the following:
- Technologies such as Python, Teradata, MySQL, SQL Server, RDBMS, Apache Airflow, AWS S3, AWS data lakes, Unix scripting, AWS CloudFormation, DevOps, GitHub
- Demonstrate best practices in Airflow orchestration, such as creating DAGs, along with hands-on knowledge of Python libraries including Pandas, NumPy, Boto3, DataFrames, database connectors, and APIs (a minimal DAG sketch follows this list)
- Data modelling, master and operational data stores, data ingestion and distribution patterns, ETL/ELT technologies, relational and non-relational DBs, DB optimization patterns
- Develop virtual warehouses using Snowflake for data-sharing needs for both internal and external customers.
- Build Snowflake data-sharing capabilities that establish a marketplace for sharing files, datasets, and other types of data at both real-time and batch frequencies
- At least 8+ years of ETL/data development experience
- Working knowledge of Fact / Dimensional data models and AWS Cloud
- Strong experience in creating technical design documents, source-to-target mappings, and test cases/results.
- Understand the security requirements and apply RBAC, PBAC, and ABAC policies to the data.
- Build data pipelines in Snowflake leveraging Data Lake (S3/Blob), Stages, Streams, Tasks, Snowpipe, Time travel, and other critical capabilities within Snowflake
- Ability to collaborate, influence, and communicate across multiple stakeholders and levels of leadership, speaking at the appropriate level of detail to both business executives and technology teams
- Excellent communication skills with a demonstrated ability to engage, influence, and encourage partners and stakeholders to drive collaboration and alignment
- A high degree of organization, individual initiative, and personal accountability and resiliency; results- and solution-oriented
- Demonstrated learning agility, ability to make decisions quickly and with the highest level of integrity
- Demonstrable experience of driving meaningful improvements in business value through data management and strategy
- Must have a positive, collaborative leadership style with a colleague- and customer-first attitude
- Should be a self-starter and team player, capable of working with a team of architects, co-developers, and business analysts
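To make the Airflow expectation above concrete, here is a minimal DAG sketch (Airflow 2.4+ syntax) that pulls a file from S3 with Boto3 and profiles it with Pandas; the DAG id, bucket, and key are placeholders.

```python
"""Minimal Airflow DAG sketch: extract a CSV from S3 and log a basic profile.

The DAG id, bucket, and key are placeholders for illustration only.
"""
from datetime import datetime

import boto3
import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_profile(bucket: str, key: str) -> None:
    # Download the source file and log simple row/column counts as a profiling step.
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, "/tmp/extract.csv")
    df = pd.read_csv("/tmp/extract.csv")
    print(f"Extracted {len(df)} rows with columns {list(df.columns)}")


with DAG(
    dag_id="daily_sales_extract",  # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_profile",
        python_callable=extract_and_profile,
        op_kwargs={"bucket": "example-datalake", "key": "sales/latest.csv"},
    )
```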
Preferred Qualifications:
- Experience with Azure Cloud, DevOps implementation
- Ability to work in a collaborative team, mentoring and training junior team members
- Position requires expert knowledge across multiple platforms, data ingestion patterns, processes, data/domain models, and architectures.
- Candidates must demonstrate an understanding of the following disciplines: enterprise architecture, business architecture, information architecture, application architecture, and integration architecture.
- Ability to focus on business solutions and understand how to achieve them according to the given timeframes and resources.
- Recognized as an expert/thought leader. Anticipates and solves highly complex problems with a broad impact on a business area.
- Experience with Agile Methodology / Scaled Agile Framework (SAFe).
- Outstanding oral and written communication skills including formal presentations for all levels of management combined with strong collaboration/influencing.
Preferred Education/Skills:
- Master’s degree preferred
- Bachelor’s Degree in Computer Science with a minimum of 8+ years relevant experience or equivalent.
- Analyze system requirements and prioritize tasks
- Write clean, testable code using .NET programming languages
- Develop technical specifications and architecture
- Test and debug various .NET applications
- Review and refactor code
- Deploy fully functional applications
- Upgrade existing programs
- Support junior developers' work
- Document development and operational procedures
- Architecting end-to-end prediction pipelines and managing them
- Scoping projects and mentoring 2-4 people
- Owning parts of the AI and data infrastructure of the organization
- Develop state-of-the-art deep learning/classical models
- Continuously learn new skills and technologies and implement them when relevant
- Contribute to the community through open-source, blogs, etc.
- Make a number of high-quality decisions about infrastructure, pipelines, and internal tooling.
What are we looking for
- Deep understanding of core concepts
- Broader knowledge of different types of problem statements and approaches
- Great hold on Python and the standard library
- Knowledge of industry-standard tools like scikit-learn, TensorFlow/PyTorch, etc. (a minimal scikit-learn sketch follows this list)
- Experience with at least one of Computer Vision, Forecasting, NLP, or Recommendation Systems is a must
- A get shit done attitude
- A research mindset and the creativity to use previous work to your advantage.
- A helping/mentoring first approach towards work
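As a small illustration of the classical-ML side of the toolkit listed above, here is a scikit-learn sketch with a preprocessing-plus-model pipeline evaluated by cross-validation; the synthetic dataset stands in for a real problem statement.

```python
"""Minimal classical-ML baseline sketch: pipeline + cross-validation on synthetic data."""
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data stands in for a real problem statement.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Keeping scaling and the model in one Pipeline avoids leaking test-fold statistics
# into training during cross-validation.
pipeline = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} ± {scores.std():.3f}")
```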
- Participation in multiple Avaloq Core Banking Platform implementations in various business/technical streams
- Ability to develop high-level software designs and solutions
- Excellent analytical skills and a systematic approach to problem solving
- Ability to articulate complex technical issues to business stakeholders
- Good understanding of core business processes and products in the private banking industry
Responsibilities:
- Develop the core platform components.
- Work on integrations with 3rd party systems.
- Coordinate with the frontend team, designers, and product managers on development requirements
Requirements:
- At least 3 years of experience in developing and managing software systems.
- Proficiency in programming, data structures and algorithms.
- Deep understanding of caching technologies, databases, and OOP.
- A computer science degree from a Tier-1 college is a must.
- Experience in Python is preferred.
- Have a high bar for the quality of the product.
- Creative, independent, self-motivated, and willing to learn new technology.
- Possess a good understanding of QA methodologies and processes.
- Strong test planning ability with an understanding of enterprise storage workflows.
- Ability to take up a variety of roles in a startup environment.
- Excellent troubleshooting abilities spanning multiple software and hardware components (such as switches, storage systems, kernels).
- Good knowledge of the storage stack, file system internals, and testing tools like fio, FSCT, vdbench, dd, SPEC SFS, etc. (a minimal fio harness sketch follows this list)
- Strong knowledge of one or more storage protocols like NFS, iSCSI, CIFS, SMB, S3.
- Prior experience of testing storage filers using NFS/SMB/CIFS/S3 is a big plus.
- Experience with distributed systems (databases, storage, map-reduce frameworks, etc.).
- Good understanding of storage performance characteristics.
- Prior work experience in testing enterprise storage products is a must.
- Prior programming experience, preferably in Go, Python, or Bash.
- Knowledge of ESXi, Hyper-V, and KVM is a plus.
- Containerization and CI/CD environment experience.
- Strong analytical skills, problem-solving aptitude, and attention to detail.
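For the benchmarking tools mentioned above, here is a minimal Python sketch that wraps a short fio job and asserts on its parsed JSON output; the target file, job parameters, and IOPS floor are placeholders for a real test plan.

```python
"""Minimal sketch: run a short fio random-read job and assert an IOPS floor.

Requires fio on PATH; the target file, job parameters, and threshold are
placeholders for a real test plan.
"""
import json
import subprocess


def run_fio_randread(target: str = "/tmp/fio.testfile") -> dict:
    # fio's JSON output format is stable and easy to assert against in pytest.
    cmd = [
        "fio", "--name=randread", "--rw=randread", "--bs=4k",
        "--size=64M", "--runtime=10", "--time_based",
        f"--filename={target}", "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)


def test_randread_iops_floor():
    report = run_fio_randread()
    iops = report["jobs"][0]["read"]["iops"]
    assert iops > 1000  # placeholder floor; derive from the product's performance baseline
```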
Note: Only female candidates will be considered for this role.
A Zone Manager ensures the company’s sales strategy is communicated and implemented within the specific zone for which this role is responsible. The Zone Manager drives achievement of sales and revenue targets and develops a business implementation plan for the zone to ensure it delivers against its targets.
Key Responsibilities:
- Appoint, train and develop new Representatives and Leaders in the zone.
- Achieve agreed revenue and active staff count targets and KPIs for the zone.
- Work with Leaders in the zone to develop their business. Observe and coach Leaders in the PATD process and create a PATD environment in the zone.
- Conduct Communication meetings with Leaders and Representatives in the zone.
- Identify opportunities to grow coverage in the zone as well as drive company initiatives to increase sales in the zone.
Education Criteria:
A diploma, graduate, or postgraduate degree in any stream, with experience in frontline/field sales in direct selling or FMCG.