We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from third-party data sources by writing custom automated ETL jobs in Python. The role works in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.
- 2+ years' experience as an ETL developer, with strong data architecture knowledge of data warehousing concepts, SQL development and optimization, and operational support models.
- Experience using Python to automate ETL/data processing jobs.
- Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS / Azure / on-premise environments.
- Experience with, or willingness to learn, AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
- Develop transformation queries, views, and stored procedures for ETL processes and process automation.
- Document data mappings, data dictionaries, processes, programs, and solutions as per established standards for data governance.
- Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points, such as validating control totals at intake and again after transformation, and transparently build lessons learned into future data quality assessments.
- Solid experience with data modeling, business logic, and RESTful APIs.
- Solid experience in the Linux environment.
- Experience with NoSQL databases / PostgreSQL preferred.
- Experience working with databases such as MySQL and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
- Experience with NGINX and SSL.
- Performance-tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
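The control-total validation mentioned above can be sketched in plain Python. The column name, totals, and tolerance here are hypothetical, purely for illustration:

```python
def control_total(rows, key):
    """Sum a numeric column to use as a control total."""
    return round(sum(row[key] for row in rows), 2)

def validate_load(source_rows, transformed_rows, key, tolerance=0.01):
    """Compare control totals at intake and after transformation."""
    source_total = control_total(source_rows, key)
    loaded_total = control_total(transformed_rows, key)
    ok = abs(source_total - loaded_total) <= tolerance
    return ok, source_total, loaded_total

# Hypothetical intake batch and its transformed counterpart
source = [{"amount": 100.0}, {"amount": 250.5}]
transformed = [{"amount": 100.0}, {"amount": 250.5}]
ok, src_total, dst_total = validate_load(source, transformed, "amount")
```

In a real pipeline the same check would run against row counts and key monetary columns at each intake point, with mismatches logged for the lessons-learned loop described above.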
About NeenOpal Intelligent Solutions Private Limited
Responsibilities
- Research, develop, and maintain machine learning and statistical models for business requirements
- Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem
- Coordinate with different functional teams to monitor outcomes and refine/improve the machine learning models
- Implement models to uncover patterns and predictions, creating business value and innovation
- Identify unexplored data opportunities for the business to unlock and maximize the potential of digital data within the organization
- Develop NLP concepts and algorithms to classify and summarize structured/unstructured text data
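As a toy illustration of the text-summarization responsibility above, here is a minimal frequency-based extractive summarizer in standard-library Python. Real systems use far richer models (e.g. transformer-based ones); this sketch only shows the basic idea of scoring sentences by word frequency:

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Return the n sentences with the highest average word-frequency score
    (a toy extractive summarizer)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    return sorted(sentences, key=score, reverse=True)[:n]

top = summarize("Spark is fast. Spark is scalable. Cats sleep.")
```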
Qualifications
3+ years of experience solving complex business problems using machine learning
Fluency in Python is a must, along with experience in NLP and models such as BERT
Strong analytical and critical thinking skills
Experience in building production-quality models using state-of-the-art technologies
Familiarity with databases is desirable
Ability to collaborate on projects and work independently when required
Previous experience in the fintech/payments domain is a bonus
You should have a Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or another quantitative field from a top-tier institute
● Research and develop advanced statistical and machine learning models for
analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand characteristics of data, evaluate alternate
models and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production-quality code.
● Collaborate with product management, marketing, and engineering teams in
Business Units to elicit & understand their requirements & challenges and
develop potential solutions
● Stay current with the latest research and technology ideas; share knowledge by
clearly articulating results and ideas to key decision-makers.
● File patents for innovative solutions that add to the company's IP portfolio
Requirements
● 4 to 6 years of strong experience in data mining, machine learning and
statistical analysis.
● BS/MS/PhD in Computer Science, Statistics, Applied Math, or related areas
from premier institutes (only IITs / IISc / BITS / top NITs, or top US universities)
● Experience in productizing models to code in a fast-paced start-up
environment.
● Fluency in analytical tools such as MATLAB, R, Weka, etc.
● Strong intuition for data and a keen aptitude for large-scale data analysis
● Strong communication and collaboration skills.
Job Sector: IT, Software
Job Type: Permanent
Location: Chennai
Experience: 10 - 20 Years
Salary: 12 – 40 LPA
Education: Any Graduate
Notice Period: Immediate
Key Skills: Python, Spark, AWS, SQL, PySpark
Contact at triple eight two zero nine four two double seven
Job Description:
Requirements
- Minimum 12 years' experience.
- In-depth understanding of distributed computing with Spark.
- Deep understanding of Spark architecture and internals.
- Proven experience in data ingestion, data integration, and data analytics with Spark, preferably PySpark.
- Expertise in ETL processes, data warehousing and data lakes.
- Hands-on with Python for big data and analytics.
- Hands-on experience with the Agile Scrum model is an added advantage.
- Knowledge of CI/CD and orchestration tools is desirable.
- AWS S3, Redshift, and Lambda knowledge is preferred.
Experience Range | 2 Years - 10 Years
Function | Information Technology
Desired Skills |
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in big data programming for data transformations and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
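A "query of fair complexity" in the sense above might combine grouping, a HAVING filter, and a subquery. This self-contained sketch uses an in-memory SQLite database; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (region TEXT, amount REAL);
INSERT INTO orders VALUES ('north', 100), ('north', 300), ('south', 50);
""")

# Regional totals, keeping only regions whose average order value
# exceeds the overall average order value
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING AVG(amount) > (SELECT AVG(amount) FROM orders)
    ORDER BY total DESC
""").fetchall()
```

The same shape of query translates directly to Spark SQL against a DataFrame registered as a temporary view.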
Education Type | Engineering |
Degree / Diploma | Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering |
Specialization / Subject | Any Specialisation |
Job Type | Full Time |
Job ID | 000018 |
Department | Software Development |
Big Data Engineer: 5+ yrs.
Immediate Joiner
- Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – candidate should be hands-on with PySpark code and able to do transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
- Hadoop and Hive knowledge is good to have; a working understanding is sufficient rather than a hard requirement.
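An AWS Lambda function of the kind mentioned above is just a Python handler, which can be exercised locally before deployment. The event shape here is hypothetical; in AWS the event would come from a trigger such as S3 or API Gateway:

```python
import json

def handler(event, context=None):
    """Toy Lambda handler: sums the 'values' field of the incoming event
    and returns an API Gateway-style response."""
    total = sum(event.get("values", []))
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# Local invocation with a hand-built event, as AWS would call it
resp = handler({"values": [1, 2, 3]})
```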
• Help build a Data Science team which will be engaged in researching, designing,
implementing, and deploying full-stack scalable data analytics vision and machine learning
solutions to challenge various business issues.
• Modelling complex algorithms, discovering insights and identifying business
opportunities through the use of algorithmic, statistical, visualization, and mining techniques
• Translate business requirements into quick prototypes and enable the
development of big data capabilities driving business outcomes
• Responsible for data governance and defining data collection and collation
guidelines.
• Must be able to advise, guide, and train junior data engineers in their job.
Must Have:
• 4+ years' experience in a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to work from scratch and build up a team of Data Scientists
• Open for taking up the challenges with end to end ownership
• Confident, with excellent communication skills, and a good decision-maker
We are a nascent quantitative hedge fund led by an MIT PhD and Math Olympiad medallist, offering opportunities to grow with us as we build out the team. Our fund has world class investors and big data experts as part of the GP, top-notch ML experts as advisers to the fund, plus has equity funding to grow the team, license data and scale the data processing.
We are interested in researching and taking live a variety of quantitative strategies based on historic and live market data, alternative datasets, social media data (both audio and video), and stock fundamental data.
You would join, and, if qualified, lead a growing team of data scientists and researchers, and be responsible for a complete lifecycle of quantitative strategy implementation and trading.
Requirements:
- At least 3 years of relevant ML experience
- Graduation date: 2018 or earlier
- 3-5 years of experience in high-level Python programming.
- Master's degree (or PhD) in quantitative disciplines such as Statistics, Mathematics, Physics, or Computer Science from top universities.
- Good knowledge of applied and theoretical statistics, linear algebra and machine learning techniques.
- Ability to leverage financial and statistical insights to research, explore and harness a large collection of quantitative strategies and financial datasets in order to build strong predictive models.
- Should take ownership of the research, design, development, and implementation of strategy development, and effectively communicate with other teammates
- Prior experience and good knowledge of lifecycle and pitfalls of algorithmic strategy development and modelling.
- Good practical knowledge in understanding financial statements, value investing, portfolio and risk management techniques.
- A proven ability to lead and drive innovation to solve challenges and road blocks in project completion.
- A valid GitHub profile with some activity in it
Bonus to have:
- Experience in storing and retrieving data from large and complex time series databases
- Very good practical knowledge on time-series modelling and forecasting (ARIMA, ARCH and Stochastic modelling)
- Prior experience in optimizing and backtesting quantitative strategies, doing return and risk attribution, and feature/factor evaluation.
- Knowledge of AWS/Cloud ecosystem is an added plus (EC2s, Lambda, EKS, Sagemaker etc.)
- Knowledge of REST APIs and data extracting and cleaning techniques
- Good to have experience in Pyspark or any other big data programming/parallel computing
- Familiarity with derivatives, knowledge in multiple asset classes along with Equities.
- Any progress towards CFA or FRM is a bonus
- Average tenure of at least 1.5 years in a company
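The backtesting mentioned above can be sketched with a toy moving-average crossover strategy in plain Python. The prices and window lengths are illustrative, and real backtests must handle costs, slippage, and look-ahead bias, which this sketch ignores:

```python
def backtest_ma_crossover(prices, fast=2, slow=3):
    """Toy backtest: hold the asset whenever the fast moving average of
    past prices is above the slow one; returns cumulative strategy return."""
    equity = 1.0
    for t in range(slow, len(prices)):
        fast_ma = sum(prices[t - fast:t]) / fast
        slow_ma = sum(prices[t - slow:t]) / slow
        if fast_ma > slow_ma:  # long for the current bar
            equity *= prices[t] / prices[t - 1]
    return equity

# Steadily rising prices: the fast MA stays above the slow MA, so the
# strategy captures the whole move from bar `slow` onward
prices = [100, 101, 102, 103, 104]
result = backtest_ma_crossover(prices)
```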
Roles & Responsibilities
- Proven experience deploying and tuning open-source components into enterprise-ready production tooling
- Experience with data-centre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
- Deep understanding of Linux from kernel mechanisms through user space management
- Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins).
- Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports, and dashboards.
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) up-time on globally distributed, clustered, production and non-production virtualized infrastructure.
- Wide understanding of IP networking as well as data centre infrastructure
Skills
- Expert with software development tools and source-code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production.
- Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
- Strong understanding of, and must-have experience with:
- the Apache Spark framework, specifically Spark Core and Spark Streaming,
- orchestration platforms Mesos and Kubernetes,
- data storage platforms Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS,
- core presentation technologies Kibana and Grafana.
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products
Certification
- Red Hat Certified Architect certificate or equivalent required
- CCNA certificate required
- 3-5 years of experience running open-source big data platforms
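As a sense check on the four-nines availability target mentioned above, the allowed downtime budget is a one-line calculation, independent of any particular monitoring tool:

```python
def downtime_budget_minutes(availability, days=365):
    """Minutes of allowed downtime per period for a given availability level."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

yearly = downtime_budget_minutes(0.9999)       # four nines over a year
monthly = downtime_budget_minutes(0.9999, 30)  # and over a 30-day month
```

Four nines works out to roughly 52.6 minutes of downtime per year, or about 4.3 minutes per month, which is why alerting latency matters at this level.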
The person holding this position is responsible for leading the solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.
In this position you act as an interface between the delivery team and the supply chain team, effectively understanding the client's business and supply chain.
Candidates will be expected to lead projects across several areas such as
- Demand forecasting
- Inventory management
- Simulation & Mathematical optimization models.
- Procurement analytics
- Distribution/Logistics planning
- Network planning and optimization
Qualification and Experience
- 4+ years of analytics experience in supply chain – preferred industries: hi-tech, consumer technology, CPG, automobile, retail, or e-commerce supply chain.
- Master's in Statistics/Economics, or MBA, or M.Sc./M.Tech in Operations Research/Industrial Engineering/Supply Chain
- Hands-on experience in delivery of projects using statistical modelling
Skills / Knowledge
- Hands on experience in statistical modelling software such as R/ Python and SQL.
- Experience in advanced analytics / statistical techniques – regression, decision trees, ensemble machine learning algorithms, etc. – will be considered an added advantage.
- Highly proficient with Excel, PowerPoint and Word applications.
- APICS-CSCP or PMP certification will be added advantage
- Strong knowledge of supply chain management
- Working knowledge of linear/nonlinear optimization
- Ability to structure problems through a data driven decision-making process.
- Excellent project management skills, including time and risk management and project structuring.
- Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights to business issues through data analysis.
- Ability to liaise effectively with multiple stakeholders and functional disciplines.
- Experience in optimization tools like CPLEX, ILOG, GAMS will be an added advantage.
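For the inventory-management area listed above, a classic quantitative building block is the economic order quantity (EOQ), the order size minimizing the sum of ordering and holding costs. The demand and cost figures below are purely illustrative:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2 * D * S / H), where D is annual
    demand, S is the fixed cost per order, and H is the annual holding
    cost per unit."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative figures: 10,000 units/year, $50 per order, $2/unit/year holding
q = eoq(10_000, 50, 2)
```

More realistic inventory models layer lead times, demand uncertainty, and service-level targets on top of this baseline, which is where the simulation and optimization skills above come in.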