7+ Internet of Things (IoT) Jobs in Kolkata | Internet of Things (IoT) Job Openings in Kolkata
Apply to 7+ Internet of Things (IoT) Jobs in Kolkata on CutShort.io. Explore the latest Internet of Things (IoT) job opportunities across top companies like Google, Amazon & Adobe.
We are a rapidly expanding global technology partner looking for a highly skilled Senior (Python) Data Engineer to join our exceptional Technology and Development team. The role is based in Kolkata. If you are passionate about applying your expertise and thrive on collaborating with a group of talented engineers, this role was made for you!
At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.
We are searching for an experienced Senior Data Engineer to strengthen the client's global projects team: someone with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.
As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.
Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.
- ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes.
- AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, Amazon S3, and Amazon Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
- Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
- Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
- Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
- Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
- Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
- Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
- Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
- Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.
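The ETL responsibilities listed above (ingest, transform, validate, route failures) can be sketched in plain Python. This is a minimal illustration only, not the client's actual pipeline: every field name, rule, and threshold here is hypothetical, and a production version would typically run the same logic inside AWS Glue or Lambda against S3 and Redshift.

```python
"""Sketch of a transform-and-validate ETL step with dead-letter routing.
All record fields and validation rules are hypothetical examples."""

from datetime import datetime, timezone


def transform(record: dict) -> dict:
    """Normalize a raw source record into the warehouse schema."""
    return {
        "customer_id": int(record["id"]),
        "amount": round(float(record["amount"]), 2),
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }


def validate(record: dict) -> list[str]:
    """Return a list of data-quality violations (empty means the record passes)."""
    errors = []
    if record["customer_id"] <= 0:
        errors.append("customer_id must be positive")
    if record["amount"] < 0:
        errors.append("amount must be non-negative")
    return errors


def run_pipeline(raw_records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Transform every record, routing malformed or invalid rows to a
    dead-letter list instead of failing the whole batch."""
    loaded, rejected = [], []
    for raw in raw_records:
        try:
            rec = transform(raw)
        except (KeyError, ValueError) as exc:
            rejected.append({"record": raw, "error": str(exc)})
            continue
        problems = validate(rec)
        if problems:
            rejected.append({"record": rec, "error": "; ".join(problems)})
        else:
            loaded.append(rec)
    return loaded, rejected
```

Routing failures to a dead-letter list rather than raising keeps one bad row from halting the batch, which is the usual pattern behind the data quality assurance bullet above.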
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proven experience as a Data Engineer with a focus on ETL pipeline development.
- Strong proficiency in Python programming.
- In-depth knowledge of AWS serverless technologies and services.
- Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
- Experience with version control systems (e.g., Git).
- Strong SQL skills for data extraction and transformation.
- Excellent problem-solving and troubleshooting abilities.
- Ability to work independently and collaboratively in a team environment.
- Effective communication skills for articulating technical concepts to non-technical stakeholders.
- Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer - Professional are a plus.
- Knowledge of data orchestration and workflow management tools.
- Familiarity with data visualization tools (e.g., Tableau, Power BI).
- Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.
What You Can Expect:
- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.
- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.
- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.
They offer an exciting, professional environment with great career and growth opportunities. Their office, located in the heart of Salt Lake Sector V, is a terrific workspace that is both accessible and inspiring, and team members enjoy regular team outings. Joining them means becoming part of a vibrant, dynamic team where your skills will be valued, your creativity nurtured, and your contributions will make a difference, working alongside some of the brightest minds in the industry.
If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.
Apply today for more information about this exciting opportunity.
Onsite Location: Kolkata, India (Salt Lake Sector V)
We are looking for a Snowflake developer for one of our premium clients; the role is open PAN India.
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
- Hands-on experience with Python, PySpark, and SQL.
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage.
- Experience with big data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL.
- Experience with SQL or NoSQL databases for data retrieval and management.
- Experience with data warehousing and business intelligence tools, techniques, and technology, plus a record of diving deep into data analysis or technical issues to reach effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willingness to learn and work on Data Science, ML, and AI.
- Overall 3 to 5 years of experience designing and implementing complex, large-scale software.
- Strong proficiency in Python is a must.
- Experience with Apache Spark, Scala, Java, and Delta Lake.
- Experience designing and implementing templated ETL/ELT data pipelines.
- Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments.
- Experience visualizing data from various tasks in the data pipeline using Apache Zeppelin, Plotly, or any other visualization library.
- Log management and log monitoring using ELK/Grafana.
- GitHub integration.
Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java
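As a rough illustration of the Airflow-style orchestration this stack centers on, the sketch below uses Python's standard-library `graphlib` to order hypothetical pipeline tasks by their dependencies. The task names are invented for the example; a real deployment would declare them as operators inside an Airflow DAG and let the scheduler dispatch them, rather than resolving the order by hand.

```python
"""Sketch of the task-dependency model behind Airflow-style orchestration.
Task names are hypothetical; this mirrors the shape of a DAG wired as
extract >> transform >> load, not Airflow's actual API."""

from graphlib import TopologicalSorter

# Each task maps to the set of upstream tasks it depends on.
DAG = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
}


def run_dag(dag: dict[str, set[str]]) -> list[str]:
    """Execute tasks in a dependency-respecting order and return that order."""
    executed = []
    for task in TopologicalSorter(dag).static_order():
        # In Airflow, the scheduler would dispatch each operator here.
        executed.append(task)
    return executed
```

The topological ordering guarantees the join never runs before both extracts finish and the warehouse load always runs last, which is the core property Airflow enforces for templated ETL/ELT pipelines.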