

Bachelor’s degree in Computer Science, Engineering, Environmental Science, or a related field
5+ years in Product Management in Agritech and environmental technologies
Experience with Agile methodologies and product lifecycle management
Strong understanding of data-driven decision making and KPI tracking
Experience with A/B testing methodologies
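As a hedged illustration of the A/B testing and KPI-tracking skills above, here is a minimal two-proportion z-test sketch in plain Python; the sample counts are invented for the example, and a real analysis would typically use a statistics library.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converted 150/1000 users vs. 120/1000 for A.
z = two_proportion_z(120, 1000, 150, 1000)
# |z| > 1.96 corresponds to significance at the 5% level (two-sided).
```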
Job Description
We are seeking a skilled DevOps Specialist to join our global automotive team. As a DevOps Specialist, you will be responsible for managing operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence across enterprise IT projects. You will provide support for critical application environments for leading companies in the automotive industry.
Responsibilities:
Perform daily maintenance tasks covering application availability, response times, proactive incident tracking via system logs, and resource monitoring.
Incident Management: Monitor and respond to tickets raised by the DevOps team or end-users.
Support users with prepared troubleshooting steps. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
Requirements:
DevOps skillset: logfile analysis/troubleshooting (ELK Stack), Linux administration, monitoring (AppDynamics, Checkmk, Prometheus, Grafana), security (Black Duck, SonarQube, Dependabot, OWASP, or similar)
Experience with Docker.
Familiarity with DevOps principles and ticket tools like ServiceNow.
Experience in handling confidential data and safety sensitive systems
Strong analytical, communication, and organizational abilities. Easy to work with.
Optional: experience in our relevant business domain (automotive/manufacturing industry, especially production management systems). Familiarity with IT process frameworks such as Scrum and ITIL.
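As a rough sketch of the logfile analysis this role involves, here is a stdlib-only Python example; in practice this work happens in the ELK Stack, and the log format and messages below are invented for illustration.

```python
import re
from collections import Counter

# Assumed log-line format for illustration: "<ISO timestamp> <LEVEL> <message>"
LOG_LINE = re.compile(r"^(\S+)\s+(INFO|WARN|ERROR)\s+(.*)$")

def count_errors(lines):
    """Tally ERROR messages so recurring incidents stand out."""
    errors = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(2) == "ERROR":
            errors[m.group(3)] += 1
    return errors

sample = [
    "2024-05-01T10:00:00Z INFO service started",
    "2024-05-01T10:01:12Z ERROR db connection refused",
    "2024-05-01T10:01:15Z ERROR db connection refused",
    "2024-05-01T10:02:00Z WARN retrying",
]
top = count_errors(sample).most_common(1)
```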
Skills & Requirements
DevOps, Logfile Analysis, Troubleshooting, ELK Stack, Linux Administration, Monitoring, AppDynamics, Checkmk, Prometheus, Grafana, Security, Black Duck, SonarQube, Dependabot, OWASP, Docker, CI/CD, Jenkins, GitLab, Kubernetes, AWS, Azure, ServiceNow, Incident Management, Change Management, Problem Management, Documentation, Communication, Analytical Skills, Organizational Skills, SCRUM, ITIL, Automotive Industry, Manufacturing Industry, Production Management Systems.
1. Understand client business requirements and interpret them into technical solutions
2. Build and maintain database stored procedures
3. Build and maintain ETL workflows
4. Perform quality assurance and testing at the unit level
5. Write and maintain user and technical documentation
6. Integrate Merkle database solutions with web services and cloud-based platforms
Must have: SQL Server stored procedures
Good/nice to have: UNIX shell scripting, Talend/Tidal/Databricks/Informatica, Java/Python
Experience: 2 to 10 years
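The ETL responsibilities above can be sketched as a minimal extract-transform-load pass; this uses Python's built-in sqlite3 as a stand-in for SQL Server, and the table and column names are invented for the example.

```python
import sqlite3

# In-memory source and target databases stand in for real SQL Server instances.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
src.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, 100.0, "OK"), (2, -5.0, "OK"), (3, 40.0, "CANCELLED")])

tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE clean_orders (id INTEGER, amount REAL)")

# Extract, transform (filter out invalid rows), load.
rows = src.execute("SELECT id, amount FROM raw_orders "
                   "WHERE status = 'OK' AND amount > 0").fetchall()
tgt.executemany("INSERT INTO clean_orders VALUES (?, ?)", rows)
tgt.commit()

loaded = tgt.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0]
```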
We are looking for a skilled Full Stack Developer to join our team. The successful candidate will be responsible for developing, maintaining, and scaling our server-side application logic using Python and related frameworks. As a full-stack developer, you will work closely with front-end developers, data scientists, and project managers to develop scalable and reliable software solutions.
Responsibilities:
Developing and maintaining server-side application logic using Python and related frameworks
Designing and implementing APIs and web services using RESTful principles
Collaborating with front-end developers to integrate user-facing elements with server-side logic
Designing and implementing efficient database schemas and queries
Developing and maintaining automated testing and deployment pipelines
Ensuring high performance and scalability of software applications
Knowledge of Node.js is an added advantage
Troubleshooting and debugging software issues
Staying up-to-date with emerging trends and technologies in backend development
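The API design responsibility above can be sketched with a tiny dispatch function that maps HTTP methods and paths onto a resource, RESTfully; a production service would use Flask or Django, and the resource and field names here are invented.

```python
# Minimal in-memory "users" resource; real code would back this with a database.
USERS = {1: {"id": 1, "name": "alice"}}

def handle(method, path, body=None):
    """Dispatch RESTfully: the HTTP method is the verb, the path the noun."""
    parts = path.strip("/").split("/")
    if parts[0] != "users":
        return 404, {"error": "not found"}
    if method == "GET" and len(parts) == 2:
        user = USERS.get(int(parts[1]))
        return (200, user) if user else (404, {"error": "no such user"})
    if method == "POST" and len(parts) == 1:
        new_id = max(USERS, default=0) + 1
        USERS[new_id] = {"id": new_id, **(body or {})}
        return 201, USERS[new_id]
    return 405, {"error": "method not allowed"}

status, payload = handle("GET", "/users/1")
```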
Requirements:
Strong experience with Python and related frameworks such as Flask, Django, or Pyramid
Proficient understanding of SQL and NoSQL databases
Experience with AWS or other cloud platforms
Familiarity with version control systems such as Git
Excellent problem-solving and debugging skills
Strong communication and collaboration skills
Ability to work in a fast-paced, collaborative environment
If you are a passionate Python developer with a strong desire to work on complex, challenging problems, we encourage you to apply.
InViz is Bangalore Based Startup helping Enterprises simplifying the Search and Discovery experiences for both their end customers as well as their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information/concepts from data of different formats- text, images, videos and make them easily discoverable through simple human-friendly touchpoints.
TSDE - Data
Data Engineer:
- Should have a total of 3-6 years of experience in Data Engineering.
- Should have experience coding data pipelines on GCP.
- Prior experience with Hadoop systems is ideal, as the candidate may not have end-to-end GCP experience.
- Strong in programming languages such as Scala, Python, and Java.
- Good understanding of various data storage formats and their advantages.
- Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources).
- Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
- Data Engineer certification preferred.
Experience working with GCP tools such as:
- Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore
- Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices
- Schedule: Cloud Composer
- Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep
- CI/CD: Bitbucket + Jenkins / GitLab
- Atlassian Suite
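The pipeline scenario described above, ingesting from a traditional database alongside an API-based source, can be sketched in stdlib-only Python; on GCP this would run in Dataflow or Dataproc with Cloud SQL and Pub/Sub as sources, and all names and values below are invented.

```python
import sqlite3

# Stand-in for a traditional database source (Cloud SQL in a real pipeline).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "EU"), (2, "US")])

# Stand-in for an API-based source (a Pub/Sub topic or REST feed in practice).
api_events = [{"customer_id": 1, "amount": 30.0},
              {"customer_id": 2, "amount": 12.5},
              {"customer_id": 1, "amount": 7.5}]

# Join the two sources and aggregate spend per region, as a BigQuery load might.
regions = dict(db.execute("SELECT id, region FROM customers"))
spend_by_region = {}
for ev in api_events:
    region = regions[ev["customer_id"]]
    spend_by_region[region] = spend_by_region.get(region, 0.0) + ev["amount"]
```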
BRIEF DESCRIPTION:
At least 1 year of Python, Spark, SQL, and data engineering experience
Primary skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, Redshift/Snowflake
Relevant experience: legacy ETL job migration to AWS Glue / Python & Spark
ROLE SCOPE:
Reverse engineer the existing/legacy ETL jobs
Create the workflow diagrams and review the logic diagrams with Tech Leads
Write equivalent logic in Python & Spark
Unit test the Glue jobs and certify the data loads before passing to system testing
Follow the best practices, enable appropriate audit & control mechanism
Be analytically skillful: identify root causes quickly and debug issues efficiently
Take ownership of the deliverables and support the deployments
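The migration pattern above can be sketched minimally: a legacy SQL aggregation re-expressed in Python, unit-tested before certifying the load. In a real Glue job this would be PySpark rather than plain Python, and the table and column names are invented.

```python
from collections import defaultdict

# Legacy ETL logic being migrated (invented example):
#   SELECT dept, SUM(salary) FROM employees GROUP BY dept
rows = [("eng", 100), ("eng", 120), ("sales", 90)]

def total_by_dept(rows):
    """Python re-implementation of the legacy aggregation."""
    totals = defaultdict(int)
    for dept, salary in rows:
        totals[dept] += salary
    return dict(totals)

result = total_by_dept(rows)

# Unit-test the job output against the known legacy result before
# passing the load to system testing.
assert result == {"eng": 220, "sales": 90}
```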
REQUIREMENTS:
Create data pipelines for data integration into cloud stacks, e.g., Azure Synapse
Code data processing jobs in Azure Synapse Analytics, Python, and Spark
Experience dealing with structured, semi-structured, and unstructured data in batch and real-time environments
Should be able to process .json, .parquet, and .avro files
PREFERRED BACKGROUND:
Tier 1/2 candidates from IITs/NITs/IIITs preferred
However, relevant experience and a learning attitude take precedence
- Interaction with clients, understanding their requirements, and collating the data required for preparing and submitting proposals to banks/FIs
- In-depth credit assessment involving Financial, Credit, Legal, Technical, Economic and risk analysis
- Structuring the proposal, preparing Information Memorandum, Teasers, Financial model, cash flow and CMA projections
- Ability to lead meetings and negotiations
- Solution-oriented attitude and resolving queries of all Banks and Clients
- Visiting clients' offices to understand and collect data
Content writers must know how to use a variety of writing and publishing programs, such as Microsoft Office, G Suite, and WordPress. Strong attention to detail and the ability to work under pressure are essential.
The selected intern's day-to-day responsibilities include:
1. Work on content writing for social media posts
2. Create content for the website
3. Implement copywriting techniques to make the content more engaging
4. Do on-page SEO to make the content Google-friendly
5. Create content that enlightens, informs, and sells
6. Ideate and strategize the content plan
Candidates must have excellent written and verbal English communication skills.
About OpsCruise
Digital business is driving a fundamental shift to cloud-native applications, creating a new set of operational and performance challenges ill-suited to the currently available solutions. At OpsCruise, we imagine a world of autonomous operations and are innovating a fundamentally different approach to performance management. OpsCruise’s vision is to automate the performance assurance of cloud applications using a model-driven closed-loop platform.
Team
The OpsCruise team is a global, talented group that includes domain experts in IT Operations, Networking, Storage, Hyperscale Systems, and AI/ML who have built market-leading solutions at companies such as Cisco, Google, Hitachi, HP, Infoblox, Oracle, and VMware, among others.
Our engineering culture values creativity, pragmatism, honesty, and simplicity to solve hard problems the right way.
Role
We are looking for a Senior QA Engineer who will join our team building and rolling out our SaaS platform in the cloud AI/Ops space.
Our Technology Stack
Our product involves the following technology areas with one or more tools in use in each area:
- Container technologies, including creating Docker plugins and extensions
- Serverless technologies including instrumentation, addons
- Orchestrators including Kubernetes, OpenShift, Mesos, Swarm
- Metric generation and collection including Prometheus and tools such as Dynatrace and Datadog
- Tracing including OpenTracing, Jaeger
- Graph tools and Databases including neo4j, JanusGraph, TinkerPop/Gremlin
- TimeSeries databases such as Prometheus, OpenTSDB
- NoSQL and Indexing tools such as MongoDB, Cassandra, Solr and Elastic
- Languages including Java, Scala, Javascript, Python, R, and Go
- Messaging tools including Kafka, Akka
- Big Data tools including HDFS, YARN, Spark, Flink
- AI/ML techniques including Statistical Analysis, Classification, Deep Learning, etc.
- Cloud services: AWS, GCP, and Azure, their services in databases, networking and ML tools
- High performance User Interfaces including AngularJS, Vue, D3.js and local stores
- Authentication and Authorization including tools such as Okta and KeyCloak.
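As one hedged example of the metric-analysis techniques the stack above touches (statistical analysis over collected metrics), here is a minimal outlier check in stdlib Python; real metrics would come from Prometheus scrapes, and the latency samples below are invented.

```python
import statistics

# Invented latency samples (ms) standing in for metrics scraped by Prometheus;
# the last point is the newest observation being checked.
samples = [101, 99, 103, 98, 102, 100, 97, 250]

baseline = samples[:-1]
mean = statistics.fmean(baseline)
stdev = statistics.stdev(baseline)

def is_anomaly(value, mean, stdev, k=3.0):
    """Flag points more than k standard deviations from the baseline mean."""
    return abs(value - mean) > k * stdev

latest_is_anomalous = is_anomaly(samples[-1], mean, stdev)
```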
Responsibilities
- Understand user-level requirements, write up the test strategy, and derive the test plan and test cases in detail.
- System and integration testing with automation of test cases using Python or Java.
- Use test frameworks such as Robot Framework.
- Use traffic generator tools such as JMeter for performance testing
- Set up the target test environments in AWS, Azure or GCP
- Containers based environment setup with Kubernetes and Docker, monitoring tools setup with Prometheus
- Debug incidents and issues to narrow them down to a root cause
- Understand and reproduce internally issues reported by customers
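The Python test automation these responsibilities describe can be sketched with the standard unittest module; in practice the suites would live in Robot Framework or a similar framework, and the function under test here is invented.

```python
import unittest

def parse_version(tag):
    """Hypothetical function under test: 'v1.2.3' -> (1, 2, 3)."""
    return tuple(int(p) for p in tag.lstrip("v").split("."))

class TestParseVersion(unittest.TestCase):
    def test_plain(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_v_prefix(self):
        self.assertEqual(parse_version("v10.0.1"), (10, 0, 1))

# Run the suite programmatically, as a CI job (e.g. Jenkins) would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParseVersion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```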
Qualifications
The ideal candidate must have the following qualifications.
- B.E/B.Tech Degree from a reputed institution with at least 4 years of relevant experience.
- Hands-on experience with test automation using Python or Java.
- Experience with test frameworks such as Robot Framework or TestNG.
- Experience using traffic generation tools such as curl-loader or JMeter
- QA engineers with experience testing networking technology products (switches, routers, L4-L7 products) can also apply.
- Knowledge of public clouds, AWS, Azure or GCP, is desired
- Working knowledge in Kubernetes, Docker or Openshift environments required
- Hands on experience with Linux
- Strong problem solving and debugging abilities
- Familiarity with continuous integration tools such as Jenkins or CircleCI
- Interest in machine learning (ML) and data science is a plus
Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.
LOCATION
Chennai, India
Job brief
We are looking for a competent Account Executive to find business opportunities and manage customer relationships. You’ll be directly responsible for the preservation and expansion of our customer base.
The ideal candidate will be experienced in sales and customer service. We expect you to be a reliable professional, able to balance customer orientation and a results-driven approach.
Your overarching goal is to identify opportunities with prospects and new clients and build them into long-term profitable relationships.













