
Big Data Engineer
at Altimetrik
Experience developing AWS Lambda functions
Expertise with Spark/PySpark: candidates should be hands-on with PySpark code and able to implement transformations with Spark
Ability to code in both Python and Scala
Snowflake experience is a plus
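As a minimal sketch of what hands-on Lambda development looks like, here is a handler performing a simple record transformation. The event shape (a `records` list with a `name` field) is hypothetical, not taken from the posting:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: read records from the (hypothetical)
    event payload, uppercase each name, and return a JSON response."""
    records = event.get("records", [])
    transformed = [{"name": r["name"].upper()} for r in records]
    return {"statusCode": 200, "body": json.dumps(transformed)}
```

The same handler signature works whether the event comes from API Gateway, S3, or a test invocation; only the event parsing changes.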

🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)
Experience: 4–8 Years
Location: Bengaluru (On-site / Hybrid)
Company: Publicly Listed, Global Product Platform
🧠 About the Mission
We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.
This is not incremental improvement.
This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.
We are re-architecting:
- Legacy systems → AI-native architectures
- Static pipelines → autonomous, self-healing systems
- Data platforms → intelligent, learning systems
- Software workflows → agentic execution layers
This is the kind of shift you would expect from companies like Google or Microsoft, except here you will build it from day zero and scale it globally.
🧠 The Opportunity: This role sits at the intersection of three high-impact domains:
1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI
2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines
3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure
We are building systems where:
- Data platforms optimize themselves using ML/LLMs
- Pipelines are autonomous, self-healing, and adaptive
- Queries are generated, optimized, and executed intelligently
- Infrastructure learns from usage and evolves continuously
This is: AI as the control plane for data infrastructure
🧩 What You’ll Work On
You will design and build AI-native systems deeply embedded inside data infrastructure.
1. AI-Native Data Platforms
- Build LLM-powered interfaces:
- Natural language → SQL / pipelines / transformations
- Design semantic data layers:
- Embeddings, vector search, knowledge graphs
- Develop AI copilots:
- For data engineers, analysts, and platform users
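The natural-language-to-SQL interface above can be sketched as follows. A real system would prompt an LLM with the table schema; here a keyword rule stands in for the model call, and all names are hypothetical:

```python
def nl_to_sql(question: str, table: str, columns: list[str]) -> str:
    """Toy natural-language-to-SQL interface: the rule below is a stub
    standing in for an LLM that would receive the schema in its prompt."""
    if "count" in question.lower():
        return f"SELECT COUNT(*) FROM {table}"
    # Default: project the known columns from the schema
    return f"SELECT {', '.join(columns)} FROM {table}"
```

The value of the sketch is the interface shape: the model only ever sees schema plus question, and the platform validates and executes the returned SQL.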
2. Autonomous Data Pipelines
- Build self-healing ETL/ELT systems using AI agents
- Create pipelines that:
- Detect anomalies in real time
- Automatically debug failures
- Dynamically optimize transformations
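A heavily simplified sketch of the self-healing idea above: retry with backoff for transient failures, plus a z-score check for real-time anomaly detection. Function names and thresholds are hypothetical:

```python
import statistics
import time

def run_step(step, retries=3, backoff_s=0.01):
    """Run one pipeline step; on failure, back off and retry.
    A crude stand-in for 'automatically debug failures'."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(backoff_s * attempt)  # linear backoff before retrying

def anomalous(value, history, k=3.0):
    """Flag a reading more than k standard deviations from the history mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid zero stdev on flat history
    return abs(value - mean) > k * stdev
```

A production system would replace the bare retry with diagnosis (inspect the exception, reroute, or re-plan the transformation), but the control loop is the same.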
3. Intelligent Query & Compute Optimization
- Apply ML/LLMs to:
- Query planning and execution
- Cost-based optimization using learned models
- Workload prediction and scheduling
- Build systems that:
- Learn from query patterns
- Continuously improve performance and cost efficiency
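"Learn from query patterns" can be reduced to a toy feedback loop: keep a running mean latency per query template and predict from it. A real optimizer would use far richer features; the class and method names here are hypothetical:

```python
from collections import defaultdict

class LatencyModel:
    """Minimal learned cost model: per-template running mean latency,
    with a prior for templates never seen before."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, template: str, latency_ms: float) -> None:
        """Feedback step: record one executed query's latency."""
        self.totals[template] += latency_ms
        self.counts[template] += 1

    def predict(self, template: str, default_ms: float = 100.0) -> float:
        """Prediction step: mean observed latency, or the prior if unseen."""
        if self.counts[template] == 0:
            return default_ms
        return self.totals[template] / self.counts[template]
```

The observe/predict split is the essential structure: every execution feeds the model, and every scheduling decision reads it.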
4. Distributed Data + AI Infrastructure
- Architect systems operating at:
- Billions of events per day
- Petabyte-scale data
- Work with:
- Distributed compute engines (Spark / Flink / Ray class systems)
- Streaming systems (Kafka-class infra)
- Vector databases and hybrid retrieval systems
5. Learning Systems & Feedback Loops
- Build closed-loop AI systems:
- Execution → feedback → model updates
- Develop:
- Continual learning pipelines
- Online learning systems for infra optimization
- Experimentation frameworks (A/B, bandits, eval pipelines)
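The experimentation frameworks mentioned above, bandits in particular, can be illustrated with a minimal epsilon-greedy sketch; the function signature and arm names are hypothetical:

```python
import random

def choose_arm(rewards_by_arm, epsilon=0.1, rng=None):
    """Epsilon-greedy selection: with probability epsilon explore a random
    arm, otherwise exploit the arm with the best observed mean reward."""
    rng = rng or random.Random(0)
    if rng.random() < epsilon:
        return rng.choice(sorted(rewards_by_arm))
    return max(rewards_by_arm,
               key=lambda a: sum(rewards_by_arm[a]) / len(rewards_by_arm[a]))
```

In an infra-optimization loop, each "arm" might be a candidate configuration, and the reward a cost or latency signal fed back after execution.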
6. LLM & Agentic Systems (Infra-Aware)
- Build agents that understand data systems
- Enable:
- Autonomous pipeline debugging
- Root cause analysis for infra failures
- Intelligent orchestration of data workflows
🧠 What We’re Looking For
Core Foundations
- Strong grounding in:
- Machine Learning, Deep Learning, NLP
- Statistics, optimization, probabilistic systems
- Distributed systems fundamentals
- Deep understanding of:
- Transformer architectures
- Modern LLM ecosystems
Hands-On Expertise
- Experience building:
- LLM / GenAI systems (RAG, fine-tuning, embeddings)
- Data platforms (warehouse, lake, lakehouse architectures)
- Distributed pipelines and compute systems
- Strong programming skills:
- Python (ML/AI stack)
- SQL (deep understanding — query planning, optimization mindset)
Systems Thinking (Critical)
You think in systems, not components.
- Built or worked on:
- Large-scale data pipelines
- High-throughput distributed systems
- Low-latency, high-concurrency architectures
- Understand:
- Query optimization and execution
- Data partitioning, indexing, caching
- Trade-offs in distributed systems
🔥 What Sets You Apart (Top 1%)
- Built AI-powered data platforms or infra systems in production
- Designed or contributed to:
- Query engines / optimizers
- Data observability / lineage systems
- AI-driven infra or AIOps platforms
- Experience with:
- Multi-modal AI (logs, metrics, traces, text)
- Agentic AI systems
- Autonomous infrastructure
- Worked on systems at scale comparable to:
- Google (BigQuery-like systems)
- Meta (real-time analytics infra)
- Snowflake / Databricks (lakehouse architectures)
🧬 Ideal Background (Not Mandatory)
We often see strong candidates from:
- Data infrastructure or platform engineering teams
- AI-first startups or research-driven environments
- High-scale product companies
Experience building:
- Internal platforms used by 1000s of engineers
- Systems serving millions of users / high throughput workloads
- Multi-region, distributed cloud systems
🧠 The Kind of Problems You’ll Solve
- Can LLMs replace traditional query optimizers?
- How do we build self-healing data pipelines at scale?
- Can data systems learn from every query and improve automatically?
- How do we embed reasoning and planning into infrastructure layers?
- What does a fully autonomous data platform look like?
Background: We Commonly See (But Not Limited To)
Our team often includes engineers from top-tier institutions with strong research or product backgrounds, including:
- Leading engineering schools in India and globally
- Engineers with experience at top product companies, AI startups, or research-driven environments
That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.
We are looking for passionate and motivated Developers to join our growing technical team. The ideal candidate should have strong foundational knowledge in Python/Django or React with Django and be eager to work on real-time web development projects.
Open Positions:
Python Django Developer
React + Django Developer
Key Responsibilities:
- Develop, test, and maintain scalable web applications.
- Write clean, efficient, and reusable code using Django and/or React.
- Collaborate with UI/UX designers and backend developers to implement new features.
- Debug, troubleshoot, and optimize application performance.
- Participate in code reviews and contribute to team discussions.
- Stay updated with the latest web development trends and technologies.
Requirements:
- Basic to strong knowledge of Python and Django framework.
- Familiarity with React.js (for React + Django role).
- Understanding of REST APIs and database concepts.
- Knowledge of HTML, CSS, and JavaScript.
- Strong problem-solving and logical thinking skills.
- Good communication and teamwork abilities.
- Freshers and career restart candidates are welcome to apply.
More Info:
Company: Altos Technologies
Website: www.altostechnologies.in
Job Type: Permanent Job
Industry: IT / Web Development
Function: Software Development
Employment Type: Full-time
Location: Kochi & Chennai
About the Role
We’re looking for an Elixir Developer who is passionate about building scalable, high-performance backend systems. You’ll work closely with our engineering team to design, develop, and maintain reliable applications that power mission-critical systems.
Key Responsibilities
• Develop and maintain backend services using Elixir and the Phoenix framework.
• Build scalable, fault-tolerant, and distributed systems.
• Integrate APIs, databases, and message queues for real-time applications.
• Optimize system performance and ensure low latency and high throughput.
• Collaborate with frontend, DevOps, and product teams to deliver seamless solutions.
• Write clean, maintainable, and testable code with proper documentation.
• Participate in code reviews, architectural discussions, and deployment automation.
Required Skills & Experience
• 2–4 years of hands-on experience in Elixir (or strong functional programming background).
• Experience with Phoenix, Ecto, and RESTful API development.
• Solid understanding of OTP (Open Telecom Platform) concepts like GenServer, Supervisors, etc.
• Proficiency in PostgreSQL, Redis, or similar databases.
• Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure).
• Understanding of CI/CD pipelines, version control (Git), and agile development.
Good to Have
• Experience with microservices architecture or real-time data systems.
• Knowledge of GraphQL, LiveView, or PubSub.
• Exposure to performance profiling, observability, or monitoring tools.
- Key Technical Skills: Deep experience in Performance Engineering with a solid understanding of Java/J2EE technologies.
- Experienced in defining and realizing end-to-end technical architecture for large-scale, real-time enterprise systems. Ability to identify and define non-functional requirements and design systems to meet them.
- Ability to review existing architectures, identify risks and trade-offs, and share recommendations for addressing them.
- Demonstrates a strong understanding of cloud architecture considerations when scaling and tuning application deployments. Must have hands-on experience with cloud deployments on platforms such as AWS.
- Good experience leveraging APM tools to provide deep-dive analysis of performance problems, including dashboards built for CIO-level interactions. Must have relevant experience with APM tools such as Dynatrace or AppDynamics.
- Experience in performance optimization of J2EE systems on different application servers (WebLogic, WebSphere, JBoss, etc.), with deep expertise in at least one of them.
- Experience in creating and reviewing technical documents such as architecture blueprints, design specifications, and deployment documents.
- Experience working on performance testing projects. Fair understanding of performance testing tools (Apache JMeter, Gatling, HP LoadRunner) for load testing. Must be able to review performance testing programs and steer them toward the right workload model and an appropriate test and monitoring strategy, build performance models, and arrive at the right capacity planning.
- Experience in big data analytics technologies such as Apache Kafka, Apache Storm, and Apache Hadoop.
- Good skills in databases such as Oracle, MS SQL, MySQL, Cassandra, and MongoDB.
- Exposure to Agile methodologies and continuous integration tools.
- Entrepreneur/intrapreneur: someone who has built technology teams from the ground up and built new solutions from scratch.
- Very sound understanding of technology with a consultative approach.
- Sound understanding of complex enterprise IT environments and the issues faced by CIOs in the digital era.
- Excellent pre-sales experience, having played a key role in winning business alongside the sales team.
- Excellent communication, interpersonal, liaison, and problem-solving skills, with the ability to work in a multicultural environment.
- Good negotiation skills.
- Go-getter and results-oriented.
- High energy level with the ability to work well under pressure.
- Good relationship-building skills: someone who enjoys CIOs' trust and can develop relationships at all levels of the customer's technology teams.
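One concrete tool behind the workload modeling and capacity planning mentioned above is Little's Law, sketched here with hypothetical numbers:

```python
def required_concurrency(arrival_rate_per_s: float, avg_latency_s: float) -> float:
    """Little's Law, L = lambda * W: requests in flight equal arrival rate
    times average response time. A quick sanity check when sizing the
    number of virtual users in a JMeter or LoadRunner workload model."""
    return arrival_rate_per_s * avg_latency_s
```

For example, sustaining 200 requests/s at a 250 ms average response time implies roughly 50 concurrent in-flight requests, so a load test with far fewer virtual users cannot reproduce that throughput.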
- Developing new user-facing features using React.js
- Building reusable components and front-end libraries for future use
- Translating designs and wireframes into high-quality code
- Optimizing components for maximum performance across a vast array of web-capable devices and browsers
- Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
- Thorough understanding of React.js and its core principles
- Experience with popular React.js workflows (such as Redux)
- Familiarity with newer specifications of ECMAScript
- Experience with data structure libraries (e.g., Immutable.js)
- Knowledge of isomorphic React is a plus
- Familiarity with RESTful APIs
- Knowledge of modern authorization mechanisms, such as JSON Web Token
- Familiarity with modern front-end build pipelines and tools
- Ability to understand business requirements and translate them into technical requirements
- A knack for benchmarking and optimization
- Familiarity with code versioning tools (such as Git, SVN, and Mercurial)
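A minimal sketch of the JSON Web Token mechanism mentioned above, built with only the Python standard library to expose the header.payload.signature structure. Production code should use a maintained JWT library rather than this hand-rolled version:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"},
                               separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> dict:
    """Check the HMAC signature in constant time, then decode the payload."""
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(secret, signing_input.encode(),
                               hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    body = signing_input.split(".")[1]
    padded = body + "=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

The point of the exercise is the authorization model: the server trusts the payload only because the signature verifies against a secret the client never sees.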
Requirements:
● Strong software engineering background, with good problem solving skills.
● Extremely self-motivated; able to identify opportunities for improvement and tackle them without external direction
● Experience with test automation tools such as Selenium, Appium, and JUnit
● Experience testing web applications and mobile web
● Experience testing native mobile applications (Android/iOS)
● Good working knowledge of scripting languages
● Experience developing and debugging in PHP, Python, or Java
● Basic understanding of Linux systems and commands
● Knowledge of relational databases/SQL
● Strong communication and documentation skills
● BE in Computer Science or equivalent work experience
Pluses:
● Understanding of continuous deployment techniques
Core Java developer responsible for building Java applications, ranging from complex groups of back-end services to their client-side (desktop and mobile) counterparts. Your primary responsibility will be to design and develop these applications and to coordinate with the rest of the team working on different layers of the infrastructure. A commitment to collaborative problem solving, sophisticated design, and product quality is therefore essential.
Responsibilities:
- Translate application storyboards and use cases into functional applications.
- Design, build and maintain efficient, reusable, and reliable Java code.
- Ensure the best possible performance, quality, and responsiveness of the applications.
- Identify bottlenecks and bugs and devise solutions to these problems.
- Help maintain code quality, organization, and automatization.
Skills Needed:
- Java, JPA, Servlets, JAX-RS, JUnit
- Full stack: Node/Angular/React
- Algorithms, design patterns, data structures
- Tomcat, WildFly
- MySQL, PostgreSQL
- HTML, JavaScript, jQuery
- Mobile exposure: Cordova/PhoneGap
- Exposure to e-commerce or product-based domains
- NoSQL exposure
#Hurryup #Fresher #goodopportunity
We are urgently looking for a #BackofficeExecutive at a well-known authorized dealer of automobile & heavy equipment…
Experience: 0 to 1 yr
Gender: Male
Location: Morbi
Extra benefit: Stay facility will be provided by the company
Required skills:
Email communication, client coordination, invoice generation, quotation making, part selling, and client visits for payment collection. We need someone who can do in-house selling of forklift parts, send quotations to clients by email, generate invoices, follow up on payments, and collect payments.
References are highly appreciated!
About the Role
Dremio’s user experience is one of its key differentiators, making all your data easily accessible and shareable for your data consumers. UI Engineers at Dremio are responsible for the development of the user interface and user experience of Dremio’s Data Lake Engine.
Responsibilities and ownership
- Own the full cycle of development of our modern single-page web application, from inception and design through development, testing, and production.
- Care deeply about modular design patterns and frameworks, delivering an architecture rooted in simplicity that is easy to iterate on and constantly evolve.
- Be passionate about the ease of use, experience, and quality of the product.
Requirements
- 5+ years of experience working with JavaScript frameworks such as React, Angular.js, Angular, or Vue.js.
- A minimum of 2 years of experience with React is highly preferred, ideally with React in use in your current role.
- Strong coding experience in JavaScript (or TypeScript), HTML, and CSS.
- Passion for UI development and UX design.
- Proven success in delivering high-quality front-end applications.
- Fluency with SQL and databases (relational or non-relational).
- B.S. or M.S. in Computer Science or a relevant technical field, or equivalent professional experience.
