
MANDATORY:
- Top-quality Data Architect / Data Engineering Manager / Director profile
- Must have 12+ YOE in Data Engineering roles, with at least 2 years in a leadership role
- Must have 7+ YOE in hands-on tech development with Java (highly preferred), Python, Node.js, or Golang
- Must have strong experience in big data technologies and tools such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto
- Strong expertise in high-level design (HLD) and low-level design (LLD) for scalable, maintainable data architectures
- Must have managed a team of at least 5 Data Engineers (the leadership role should be evident in the CV)
- Must come from product companies (high-scale, data-heavy companies preferred)
PREFERRED:
- Tier-1 college background, IIT preferred
- Candidates should have spent a minimum of 3 years at each company.
- Recent 4+ YOE with high-growth product startups, having implemented data engineering systems from an early stage in the company
ROLES & RESPONSIBILITIES:
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency.
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.
IDEAL CANDIDATE:
- 10+ years of experience in software/data engineering, with at least 3 years in a leadership role.
- Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, Golang, or JavaScript, plus working knowledge of HTML and CSS.
- Proficiency in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- Strong foundation and expertise in high-level design (HLD), low-level design (LLD), and design patterns, preferably using Spring Boot or Google Guice
- Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
- Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
- Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.
PREFERRED QUALIFICATIONS:
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.

Similar jobs
Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
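The "advanced SQL" work described in the responsibilities above can be sketched with a small, self-contained example. Here sqlite3 stands in for Athena/Redshift, and the table and column names are hypothetical, chosen only for illustration:

```python
import sqlite3

# Hypothetical orders table standing in for a Redshift/Athena dataset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('south', '2024-01-01', 100.0),
        ('south', '2024-01-02', 300.0),
        ('north', '2024-01-01', 50.0);
""")

# Window function: running total of order amounts per region by date,
# the kind of analytical query the role's reporting work involves.
rows = conn.execute("""
    SELECT region, order_date, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY order_date)
               AS running_total
    FROM orders
    ORDER BY region, order_date
""").fetchall()

for r in rows:
    print(r)
```

The same windowed-aggregation pattern translates directly to Athena or Redshift SQL; only the connection layer changes.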
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
Job description:
Must have basic knowledge of software technologies like PHP, JavaScript, etc.
The ideal candidate will be responsible for planning, coordinating, and implementing projects within the decided-upon budget, timeline, and scope. They will also effectively monitor and present project updates to relevant stakeholders, clients, or project team members.
Responsibilities:
1. Set project timeline
2. Monitor project deliverables
3. Monitor project baseline to ensure activities progressing as planned
4. Monitor project burndown charts
5. Monitor employees' performance with KPIs
6. Update relevant stakeholders or team members on the project progress
7. Coach and support project team members with tasks you assign them
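Item 4 above (burndown monitoring) boils down to comparing remaining work against an ideal linear burn toward zero. A minimal sketch in Python, with illustrative sprint numbers, not a prescribed tool:

```python
# Minimal burndown computation: remaining work vs. an ideal linear burn.
# Sprint length and per-day completion figures below are illustrative.

def burndown(total_points, sprint_days, completed_per_day):
    """Return (ideal, actual) remaining-work series, one value per day."""
    ideal = [total_points - total_points * d / sprint_days
             for d in range(sprint_days + 1)]
    actual, remaining = [total_points], total_points
    for done in completed_per_day:
        remaining -= done
        actual.append(remaining)
    return ideal, actual

ideal, actual = burndown(total_points=20, sprint_days=5,
                         completed_per_day=[3, 5, 2, 6, 4])
print(ideal)   # ideal burn:  20, 16, 12, 8, 4, 0
print(actual)  # actual burn: 20, 17, 12, 10, 4, 0
```

Days where the actual series sits above the ideal line are the ones a project manager flags in stakeholder updates.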
Skills Required:
- Good experience with programming language Python
- Strong experience in Docker.
- Good knowledge of at least one cloud platform, such as Azure.
- Must be comfortable working in a Linux environment.
- Must have exposure to the IoT domain and its protocols (Zigbee, BLE, LoRa, Modbus)
- Must be a good team player.
- Strong Communication Skills
Designation – Technical Specialist L-1
Employment Type – Permanent
Education – Technical Graduate / Graduate / 12th + Diploma
Experience – 3 to 5 Years
Location – Bandra, Mumbai
Working – Monday to Friday
Joining – Immediately
Essential Duties and Responsibilities: (Additional duties may be assigned as required)
- Troubleshoot issues in support of production outages within SLA/escalation guidelines.
- Consult with customers to understand the issues.
- Proactively monitor environments and respond to alerts of systems and issues.
- Accurately document all actions taken; this is key to providing the best possible support.
- Perform in a fast-paced technical environment and continually identify efficiencies for systems, processes, and procedures.
Required Skills:
- Experience with proxy technologies
- Understanding of networking concepts
- Basic knowledge of network terminologies (IPS/IDS, web proxy, microsegmentation, DDoS)
- Basic knowledge of security terminologies
Roles & Responsibilities
1. Recruitment
2. Joining & onboarding
3. Engagement
4. Performance Review
5. Handling of portals.
Required Candidate profile
* Graduate.
* Good communication skills and comfortable working 6 days per week
* Should be able to work independently with guidance.
* Immediate Starters
• Minimum 5 years of experience in Magento development, with in-depth knowledge of the Magento framework, frontend architecture, themes, modules, functionality, and configuration.
• In-depth knowledge of Magento’s code structure, extension architecture, theming hierarchy, and fallback components
• Strong knowledge of implementing API services like REST and SOAP
• Must have the ability to develop Magento Modules, Themes, UI Component/Widget, and customizing existing themes/modules.
• Good understanding of the Magento themes, layout, and templating systems.
• Experience in Less and Grunt workflows.
• Experience in Knockout, RequireJS, and Underscore.
• Experience in Customizing Magento jQuery Widgets
• Knowledge of HTML/CSS and JS frameworks like Bootstrap
• Experience working in Magento 2.0
- Gathering project requirements from customers and supporting their requests.
- Creating project estimates and scoping the solution based on clients’ requirements.
- Delivery on key project milestones in line with project Plan/ Budget.
- Establishing individual project plans and working with the team in prioritizing production schedules.
- Communication of milestones with the team and to clients via scheduled work-in-progress meetings
- Designing and documenting product requirements.
- Possess good analytical skills; detail-oriented
- Be familiar with Microsoft applications and working knowledge of MS Excel
- Knowledge of MIS Reports & Dashboards
- Maintaining strong customer relationships with a positive, can-do attitude
Qualifications:
- Bachelor’s degree in management, finance, or computer science, or equivalent experience
- 6+ years of IT management experience, preferably within the small and medium business market.
- 8+ years in a software/SaaS business
- Project management experience preferred
Responsibilities:
- Setting a vision for how technology will be used in the company.
- Owning the product lifecycle from an architecture perspective
- Ensuring that the technological resources meet the company's short and long-term needs.
- Creating timelines for the development and deployment of all technological services
- Making executive decisions on behalf of the company's technological requirements
- Acting as a mentor to team members
- Maintaining a consumer-focused outlook and aiding in the delivery of IT projects to market
- Staying on top of technology trends and developments
- Powerful coder with expertise in PHP, Laravel, JavaScript, architecture design, and database design, and an agile mindset
- Must have strong knowledge in AWS infrastructure
Desired Skills:
- Communication
- Planning & Execution
- Problem Solving
- Project Management
- Quality Management
- Knowledge of deployment tasks such as on site deliveries, installation, acceptance
- Risk assessment
- Decision Making
- Customer focus
The job responsibilities include:
- Assist in establishing email marketing for our brand.
- Create automations and user journeys
- Manage day-to-day email production including campaign creation, QA and deployment.
- Assist in management of the growth and maintenance of the email database – including new sign-ups and opt-outs
- Help manage reporting on email metrics and produce insights to ensure our program continuously improves
- Manage email production with internal and external resources
- Participate in the production of all email initiatives from conception to deployment
- Assist in copywriting and editing of email content
- Perform daily engagement activities on social media channels
- Create a community of pet owners & enthusiasts across social media channels
- Responsible for organic growth of social media outreach
- Bring in fruitful PR collaboration & engagement ideas to build a recall value of the brand digitally
Role and Responsibilities
- Execute data mining projects, training and deploying models over a typical duration of 2–12 months.
- The ideal candidate should be able to innovate, analyze the customer requirement, develop a solution in the time box of the project plan, execute and deploy the solution.
- Integrate the data mining projects as embedded data mining applications in the FogHorn platform (on Docker or Android).
Core Qualifications
Candidates must meet ALL of the following qualifications:
- Have analyzed, trained and deployed at least three data mining models in the past. If the candidate did not directly deploy their own models, they will have worked with others who have put their models into production. The models should have been validated as robust over at least an initial time period.
- Three years of industry work experience, developing data mining models which were deployed and used.
- Programming experience in Python is core using data mining related libraries like Scikit-Learn. Other relevant Python mining libraries include NumPy, SciPy and Pandas.
- Data mining algorithm experience in at least 3 algorithms across: prediction (statistical regression, neural nets, deep learning, decision trees, SVM, ensembles), clustering (k-means, DBSCAN or other) or Bayesian networks
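As a minimal sketch of one of the clustering algorithms named above, k-means can be written with the standard library alone. The data and cluster count here are illustrative; a real project would use Scikit-Learn's KMeans rather than this toy version:

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; the first k points seed the centroids."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (Euclidean distance).
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # Recompute each centroid as its cluster's mean.
                centroids[i] = [sum(d) / len(members) for d in zip(*members)]
    return centroids

# Two well-separated blobs; expect centroids near (1, 1) and (10, 10).
data = [(0, 0), (1, 1), (2, 2), (9, 9), (10, 10), (11, 11)]
print(kmeans(data, 2))
```

The two loops — assignment, then centroid update — are the whole algorithm; production concerns (initialization strategy, convergence checks, vectorization) are what the library versions add.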
Bonus Qualifications
Any of the following extra qualifications will make a candidate more competitive:
- Soft Skills
- Sets expectations, develops project plans and meets expectations.
- Experience adapting technical dialogue to the right level for the audience (i.e. executives) or specific jargon for a given vertical market and job function.
- Technical skills
- Commonly, candidates have a MS or Ph.D. in Computer Science, Math, Statistics or an engineering technical discipline. BS candidates with experience are considered.
- Have managed past models in production over their full life cycle until model replacement is needed. Have developed automated model refreshing on newer data. Have developed frameworks for model automation as a prototype for product.
- Training or experience in Deep Learning, such as TensorFlow, Keras, convolutional neural networks (CNN) or Long Short Term Memory (LSTM) neural network architectures. If you don’t have deep learning experience, we will train you on the job.
- Shrinking deep learning models, optimizing to speed up execution time of scoring or inference.
- OpenCV or other image processing tools or libraries
- Cloud computing: Google Cloud, Amazon AWS or Microsoft Azure. We have integration with Google Cloud and are working on other integrations.
- Experience with tree-ensemble methods such as XGBoost or Random Forests is helpful.
- Complex Event Processing (CEP) or other streaming data as a data source for data mining analysis
- Time series algorithms from ARIMA to LSTM to Digital Signal Processing (DSP).
- Bayesian Networks (BN), a.k.a. Bayesian Belief Networks (BBN) or Graphical Belief Networks (GBN)
- Experience with PMML is of interest (see www.DMG.org).
- Vertical experience in Industrial Internet of Things (IoT) applications:
- Energy: Oil and Gas, Wind Turbines
- Manufacturing: Motors, chemical processes, tools, automotive
- Smart Cities: Elevators, cameras on population or cars, power grid
- Transportation: Cars, truck fleets, trains
About FogHorn Systems
FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT application solutions. FogHorn’s Lightning software platform brings the power of advanced analytics and machine learning to the on-premise edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as Smart Grid, Smart City, Smart Building and connected vehicle applications.
Press: https://www.foghorn.io/press-room/
Awards: https://www.foghorn.io/awards-and-recognition/
- 2019 Edge Computing Company of the Year – Compass Intelligence
- 2019 Internet of Things 50: 10 Coolest Industrial IoT Companies – CRN
- 2018 IoT Platforms Leadership Award & Edge Computing Excellence – IoT Evolution World Magazine
- 2018 10 Hot IoT Startups to Watch – Network World. (Gartner estimated 20 billion connected things in use worldwide by 2020)
- 2018 Winner in Artificial Intelligence and Machine Learning – Globe Awards
- 2018 Ten Edge Computing Vendors to Watch – ZDNet & 451 Research
- 2018 The 10 Most Innovative AI Solution Providers – Insights Success
- 2018 The AI 100 – CB Insights
- 2017 Cool Vendor in IoT Edge Computing – Gartner
- 2017 20 Most Promising AI Service Providers – CIO Review
Our Series A round was for $15 million, and our Series B round was for $30 million in October 2017. Investors include Saudi Aramco Energy Ventures, Intel Capital, GE, Dell, Bosch, Honeywell, and The Hive.
About the Data Science Solutions team
In 2018, our Data Science Solutions team grew from 4 to 9 people, and we are growing again from 11. We work on revenue-generating projects for clients, such as predictive maintenance, time to failure, and manufacturing defects. About half of our projects have been related to vision recognition or deep learning. We are not only working on consulting projects but also developing vertical solution applications with embedded data mining that run on our Lightning platform.
Our data scientists like our team because:
- We care about “best practices”
- We have a direct impact on the company’s revenue
- We give and receive mentoring as part of the collaborative process
- Questioning and challenging the status quo with data is safe
- Intellectual curiosity is balanced with humility
- We present papers or projects in our “Thought Leadership” meeting series to support continuous learning










