Job Description
▪ You are responsible for setting up, operating, and monitoring LS system solutions on premises and in the cloud
▪ You are responsible for the analysis and long-term elimination of system errors
▪ You provide support in the area of information and IT security
▪ You will work on the strategic further development and optimization of the platform in use
▪ You will work in a global, international team
Requirement Profile
▪ You have successfully completed an apprenticeship / degree in the field of IT
▪ You can demonstrate in-depth knowledge and experience in the following areas:
▪ PostgreSQL databases (see the health-check sketch after this list)
▪ Linux (e.g. Ubuntu, Oracle Linux, RHEL)
▪ Windows (e.g. Windows Server 2019/2022)
▪ Automation / IaC (e.g. Ansible, Terraform)
▪ Containerization with Kubernetes / Virtualization with VMware is an advantage
▪ Service APIs (AWS, Azure)
▪ You have very good knowledge of English, knowledge of German is an advantage
▪ You are a natural team player, show high commitment, and are resilient
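For illustration only, a minimal health probe of the kind used when operating and monitoring PostgreSQL-backed systems might look like this sketch; the DSN and database names are hypothetical, not LS-specific values:

```python
# Hypothetical health probe for operating/monitoring a PostgreSQL-backed
# system; the DSN is a placeholder, not an LS-specific value.
import psycopg2

def check_postgres(dsn: str) -> bool:
    """Return True if the server answers a trivial query."""
    try:
        with psycopg2.connect(dsn, connect_timeout=5) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
                return cur.fetchone() == (1,)
    except psycopg2.OperationalError:
        return False

if __name__ == "__main__":
    print(check_postgres("host=localhost dbname=appdb user=monitor"))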
About LS Spectrum Solutions Private Limited
Responsibilities :
- Involved in planning, design, development, and maintenance of large-scale data repositories, pipelines, analytical solutions, and knowledge management strategy
- Build and maintain an optimal, scalable data pipeline architecture, connecting operational systems data to analytics and business intelligence (BI) systems
- Build data tools that assist analytics and data science team members in building and optimizing our product into an innovative industry leader
- Report on and obtain insights from large import/export data sets, communicating relevant pointers to help in decision-making
- Prepare, analyze, and present reports to management for further development activities
- Anticipate, identify and solve issues concerning data management to improve data quality
Requirements :
- Ability to build and maintain ETL pipelines (see the sketch after this list)
- Technical business analysis experience and hands-on experience developing functional specs
- Good understanding of Data Engineering principles including data modeling methodologies
- Sound understanding of PostgreSQL
- Strong analytical and interpersonal skills as well as reporting capabilities
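As a rough illustration of the ETL skill listed above, here is a minimal, hypothetical pipeline step in Python with pandas and SQLAlchemy; the file name, table name, column names, and connection string are all invented:

```python
# Illustrative ETL step, not the employer's pipeline: extract a CSV, apply a
# light transform, and load into PostgreSQL. File, table, and URL are invented.
import pandas as pd
from sqlalchemy import create_engine

def run_etl(csv_path: str, table: str, db_url: str) -> int:
    df = pd.read_csv(csv_path)                     # extract
    df = df.dropna(subset=["order_id"])            # transform: drop bad rows
    df["amount"] = df["amount"].astype(float)      # normalize types
    engine = create_engine(db_url)
    df.to_sql(table, engine, if_exists="append", index=False)  # load
    return len(df)

rows = run_etl("exports.csv", "stg_exports",
               "postgresql://user:pass@localhost:5432/dw")
print(f"loaded {rows} rows")
```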
We have urgent requirements for Big Data Developer profiles with a reputed MNC company.
Location: Pune/Bangalore/Hyderabad/Nagpur
Experience: 4-9 years
Skills: PySpark and AWS; or Spark, Scala, and AWS; or Python and AWS
The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).
The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premises in Merck's own data centers. Developing pipelines and applications on Foundry requires:
• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation (a minimal sketch follows this list)
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and access layers (e.g. JDBC, MySQL, Microsoft SQL Server); not all types required
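To make the PySpark expectation concrete, the following is a minimal, hypothetical batch transform of the kind such pipelines are built from; the paths and column names are invented, and this is not Foundry-specific code:

```python
# Minimal PySpark batch transform: read JSON events, filter, aggregate per
# day, write Parquet. Paths and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

events = spark.read.json("s3://bucket/raw/events/")  # placeholder source

daily = (
    events.filter(F.col("status") == "ok")
    .groupBy(F.to_date("ts").alias("day"))  # 'ts' is an assumed column
    .agg(F.count("*").alias("events"))
)

daily.write.mode("overwrite").parquet("s3://bucket/curated/daily_events/")
```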
This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.
Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources, structured and unstructured, into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst when developing requirements for Foundry pipelines
• Review code developed by other data engineers against platform-specific standards, cross-cutting concerns, coding and configuration standards, and the functional specification of the pipeline
• Document technical work in a professional and transparent way; create high-quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implement changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• Set up DevOps projects following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems, debugging across the full stack of Foundry and code based on Python, PySpark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
Introduction
Synapsica (http://www.synapsica.com/) is a Series A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on a cryptic two-liner given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key team member in the decision-making, implementation, development, and advancement of ML operations for the core AI platform.
Roles and Responsibilities:
- Work closely with a cross-functional team to serve business goals and objectives.
- Develop, implement, and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring, and retraining of models
- Design and build application containerization and orchestration with Docker and Kubernetes on the AWS platform.
- Build and maintain code, tools, and packages in the cloud
Requirements:
- At least 2 years of experience in data engineering
- At least 3 years of experience in Python, with familiarity with popular ML libraries
- At least 2 years of experience in model serving and pipelines
- Working knowledge of containers and orchestration (Docker, Kubernetes) on AWS
- Experience designing distributed systems deployments at scale
- Hands-on experience in coding and scripting
- Ability to write effective, scalable, and modular code
- Familiarity with Git workflows, CI/CD, and NoSQL databases such as MongoDB
- Familiarity with Airflow, DVC, and MLflow is a plus (a minimal MLflow sketch follows this list)
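A minimal sketch of the MLflow tracking pattern referenced above; the experiment name, model, and data are placeholders rather than Synapsica's actual stack:

```python
# Sketch of MLflow experiment tracking around a (toy) training run; the
# experiment name, model, and data are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)  # synthetic data

mlflow.set_experiment("retraining-demo")
with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # store a versioned artifact
```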
We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description :
Experience: 6+ Years
Work Location: Pune / Hyderabad
Technical Skills :
- Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server Developer
- Knowledge of database performance tuning techniques
- Rich experience in database development
- Experience in designing and implementing business applications using the Oracle relational database management system
- Experience in developing complex database objects such as stored procedures, functions, packages, and triggers using SQL and PL/SQL
Required Candidate Profile :
- Excellent communication, interpersonal, analytical skills and strong ability to drive teams
- Analyze data requirements and data dictionaries for moderate to complex projects
- Lead data model analysis discussions while collaborating with Application Development teams, Business Analysts, and Data Analysts during joint requirements analysis sessions
- Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
- Stakeholder management and client engagement skills
- Strong communication skills (written and verbal)
About Us!
A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud by leveraging automation.
We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
- Eagle: Data Warehouse Assessment & Migration Planning Product
- Raven: Automated Workload Conversion Product
- Pelican: Automated Data Validation Product, which helps automate and accelerate data migration to the cloud
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)
Salary: Competitive as per Industry Standard
We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing Scalable Smart Data Lakes, Data Ingestion Platforms, Machine Learning and NLP based Analytics Platforms, Hyper-Scale Processing Clusters, Data Mining and Search Engines.
What You’ll Need:
- 3+ years of industry experience in creating and managing end-to-end data solutions, optimal data processing pipelines, and architecture dealing with large-volume big data sets of varied data types.
- Proficiency in Python, Linux and shell scripting.
- Strong knowledge of working with PySpark DataFrames and Pandas DataFrames for writing efficient pre-processing and other data manipulation tasks.
- Strong experience in developing the infrastructure required for data ingestion and for optimal extraction, transformation, and loading of data from a wide variety of data sources using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).
- Working knowledge of GitHub or other version control tools.
- Experience with creating RESTful web services and API platforms.
- Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
- Experience with cloud providers like Azure/AWS/GCP.
- Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
- Experience with stream-processing systems such as Spark Streaming and Kafka, and working experience with event-driven architectures (a minimal streaming sketch follows this list).
- Strong analytic skills related to working with unstructured datasets.
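As a sketch of the stream-processing experience mentioned above, here is a minimal Spark Structured Streaming job reading from Kafka; the broker address, topic, and windowing are hypothetical, and it assumes the spark-sql-kafka connector is available on the classpath:

```python
# Hypothetical Spark Structured Streaming job: count Kafka messages per
# 1-minute window. Broker, topic, and sink are placeholders; requires the
# spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-example").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "clickstream")                # placeholder topic
    .load()
)

# The Kafka source exposes a 'timestamp' column; window on it and count.
counts = raw.groupBy(F.window(F.col("timestamp"), "1 minute")).count()

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```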
Good to have (to filter or prioritize candidates)
- Experience with testing libraries such as pytest for writing unit tests for the developed code.
- Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
- Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
- Knowledge of Azure Functions, Elasticsearch, etc. will be good to have.
- Experience with model versioning (MLflow) and data versioning will be beneficial.
- Experience with microservices libraries or with Python libraries such as Flask for hosting ML services and models would be great (a minimal Flask sketch follows this list).
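A minimal sketch of the Flask model-hosting pattern mentioned in the last item; the model artifact and request shape are assumptions, not the actual service:

```python
# Minimal Flask service exposing a predict endpoint; the model artifact
# (model.pkl) and request shape are hypothetical.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # assumed pre-trained scikit-learn model

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]      # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()  # JSON-serializable output
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```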
Preferred Education & Experience:
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in and 5+ years of hands-on demonstrable experience with:
▪ Data Analysis & Data Modeling
▪ Database Design & Implementation
▪ Database Performance Tuning & Optimization
▪ PL/pgSQL & SQL
- 5+ years of hands-on development experience in a relational database (PostgreSQL/SQL Server/Oracle).
- 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views (a minimal PL/pgSQL sketch follows this list).
- Demonstrable working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.
- Demonstrable working experience in database read & write performance tuning and optimization.
- Knowledge of and working experience with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are added values.
- Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
- Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
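To make the PL/pgSQL expectation concrete, here is a minimal, hypothetical function created and called from Python via psycopg2; the order_items table, function name, and DSN are invented for the sketch:

```python
# Illustrative only: create and call a small PL/pgSQL function via psycopg2.
# The order_items table, function name, and DSN are invented for the sketch.
import psycopg2

DDL = """
CREATE OR REPLACE FUNCTION order_total(p_order_id INT)
RETURNS NUMERIC
LANGUAGE plpgsql AS $$
DECLARE
    v_total NUMERIC;
BEGIN
    SELECT COALESCE(SUM(amount), 0) INTO v_total
    FROM order_items
    WHERE order_id = p_order_id;
    RETURN v_total;
END;
$$;
"""

with psycopg2.connect("dbname=shop user=dev") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute("SELECT order_total(%s)", (42,))
        print(cur.fetchone()[0])  # total amount for the hypothetical order
```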
Hi All,
We are hiring a Data Engineer for one of our clients for the Bangalore & Chennai locations.
- Strong knowledge of SCCM, App-V, and Intune infrastructure
- PowerShell/VBScript/Python
- Windows Installer
- Knowledge of the Windows 10 registry
- Application repackaging
- Application sequencing with App-V
- Deploying and troubleshooting applications, packages, and task sequences
- Security patch deployment and remediation
- Windows operating system patching and Defender updates
Thanks,
Mohan.G
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as clickstream, transactional, and many other sources
- Create and maintain optimal data pipeline architecture and ETL processes
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Develop data pipeline and infrastructure to support real-time decisions
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with data at large scale
- Proficiency in writing and debugging complex SQL queries
- Experience working with AWS big data tools
- Ability to lead projects and implement best data practices and technologies
Data Pipelining
- Strong command of building & optimizing data pipelines, architectures, and data sets
- Strong command of relational SQL & NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a skeleton Airflow DAG follows)
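As an illustration of the workflow-management tools named above, a skeleton Airflow DAG might look like the following; the DAG id, schedule, and task bodies are placeholders, not a real pipeline:

```python
# Skeleton Airflow DAG illustrating pipeline orchestration; the dag_id,
# schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from sources")

def transform():
    print("clean and reshape data")

def load():
    print("write data to the warehouse")

with DAG(
    dag_id="etl_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t  # linear dependency chain
```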
Big Data: Strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing systems: RabbitMQ, etc.
Software Development & Debugging
- Strong experience in object-oriented programming / object-function scripting languages: Python, Java, C++, Scala, etc.
- Strong grasp of data structures & algorithms
What would be a bonus
- Prior experience working in a fast-growth startup
- Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data