- Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions.
- Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (now Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
- Expert in T-SQL programming: complex stored procedures, functions, views, and query optimization.
- Database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using Logistic and linear regression, Decision tree and Random forest Algorithms.
- Writing PolyBase queries for exporting and importing data into Azure Data Lake.
- Building data models, both tabular and multidimensional, using SQL Server Data Tools.
- Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
- Programming experience using the Python libraries NumPy, Pandas, and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server: 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, and 2019.
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premises SQL Server databases to Microsoft Azure.
- Hands-on experience using Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries; hands-on experience using SQL Server Extended Events.
- Data modeling using Power BI for ad-hoc reporting.
- Raw data load automation using T-SQL and SSIS.
- Expert in migrating existing on-premises databases to Azure SQL Database.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server.
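The modeling bullets above name linear regression among the expected techniques. As a hedged illustration only (the data points are invented, and plain Python is used instead of a library so the sketch is self-contained), an ordinary least-squares fit of a simple linear model looks like:

```python
# Minimal sketch: fitting simple linear regression by ordinary least squares.
# Invented data; a real workflow would use scikit-learn or statsmodels.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly y = 2x
slope, intercept = fit_linear(xs, ys)
print(round(slope, 2), round(intercept, 2))   # → 1.99 0.09
```

The same covariance-over-variance formula is what library implementations compute under the hood for the single-feature case.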
A Delhi NCR based Applied AI & Consumer Tech company tackling one of the largest unsolved consumer internet problems of our time. We are a motley crew of smart, passionate and nice people who believe you can build a high-performing company with a culture of respect, aka a sports team with a heart, aka a caring meritocracy.
Our illustrious angels include unicorn founders, serial entrepreneurs with exits, tech & consumer industry stalwarts and investment professionals/bankers.
We are hiring for our founding team (in Delhi NCR only, no remote) that will take the product from prototype to a landing! Opportunity for disproportionate non-linear impact, learning and wealth creation in a classic 0-1 with a Silicon Valley caliber founding team.
Key Responsibilities:
1. Data Strategy and Vision:
· Develop and drive the company's data analytics strategy, aligning it with overall business goals.
· Define the vision for data analytics, outlining clear objectives and key results (OKRs) to measure success.
2. Data Analysis and Interpretation:
· Oversee the analysis of complex datasets to extract valuable insights, trends, and patterns.
· Utilize statistical methods and data visualization techniques to present findings in a clear and compelling manner to both technical and non-technical stakeholders.
3. Data Infrastructure and Tools:
· Evaluate, select, and implement advanced analytics tools and platforms to enhance data processing and analysis capabilities.
· Collaborate with IT teams to ensure a robust and scalable data infrastructure, including data storage, retrieval, and security protocols.
4. Collaboration and Stakeholder Management:
· Collaborate cross-functionally with teams such as marketing, sales, and product development to identify opportunities for data-driven optimizations.
· Act as a liaison between technical and non-technical teams, ensuring effective communication of data insights and recommendations.
5. Performance Measurement:
· Establish key performance indicators (KPIs) and metrics to measure the impact of data analytics initiatives on business outcomes.
· Continuously assess and improve the accuracy and relevance of analytical models and methodologies.
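The KPI responsibilities above can be made concrete with a small sketch. The funnel numbers and metric choices below are invented for illustration, not taken from the posting:

```python
# Illustrative sketch only: computing two common KPIs (conversion rate and
# period-over-period change) from made-up funnel numbers.

def conversion_rate(conversions, visitors):
    return conversions / visitors if visitors else 0.0

def pct_change(current, previous):
    return (current - previous) / previous * 100 if previous else float("inf")

this_week = {"visitors": 4000, "conversions": 220}
last_week = {"visitors": 3800, "conversions": 190}

cr_now = conversion_rate(this_week["conversions"], this_week["visitors"])
cr_prev = conversion_rate(last_week["conversions"], last_week["visitors"])
print(f"conversion rate: {cr_now:.1%}, WoW change: {pct_change(cr_now, cr_prev):+.1f}%")
```

In practice these definitions would live in a metrics layer or BI tool rather than ad-hoc scripts, so every team computes the KPI the same way.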
Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or related field.
- Proven experience (5+ years) in data analytics, with a focus on leading analytics teams and driving strategic initiatives.
- Proficiency in data analysis tools such as Python, R, SQL, and advanced knowledge of data visualization tools.
- Strong understanding of statistical methods, machine learning algorithms, and predictive modelling techniques.
- Excellent communication skills, both written and verbal, to effectively convey complex findings to diverse audiences.
Skills: Machine Learning, Deep Learning, Artificial Intelligence, Python.
Location: Chennai
Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI
Requirements:
· To be part of Digital Manufacturing and Industrie 4.0 projects across the Saint-Gobain group of companies
· Design and develop AI/ML models to be deployed across SG factories
· Knowledge of Hadoop, Apache Spark, MapReduce, Scala, Python programming, and SQL and NoSQL databases is required
· Should be strong in statistics, data analysis, data modelling, machine learning techniques, and neural networks
· Prior experience in developing AI and ML models is required
· Experience with data from the Manufacturing Industry would be a plus
Roles and Responsibilities:
· Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics
· Multitasking and good communication skills are necessary
· Entrepreneurial attitude.
- Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects, with or without an AI component
- Programmatically ingest data from several static and real-time sources (incl. web scraping)
- Render results through dynamic interfaces, incl. web / mobile / dashboard, with the ability to log usage and granular user feedback
- Performance-tune and optimally implement complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment
- Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
- Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
- Build data tools to facilitate fast data cleaning and statistical analysis
- Ensure data architecture is secure and compliant
- Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
- Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
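The ingest-clean-load responsibilities above can be sketched in miniature. This is a hedged illustration, not the employer's pipeline: the CSV payload, table name, and schema are invented, and an in-memory SQLite database stands in for a real warehouse.

```python
# Tiny ETL sketch: ingest CSV text, drop bad rows, load into a SQLite table.
# Everything here (data, table, cleaning rule) is invented for illustration;
# a production pipeline would add logging, retries, and schema validation.
import csv
import io
import sqlite3

raw = io.StringIO("id,amount\n1,10.5\n2,\n3,7.25\n")   # stand-in for a real source

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")

rows = []
for rec in csv.DictReader(raw):
    if not rec["amount"]:          # cleaning step: skip rows with missing amounts
        continue
    rows.append((int(rec["id"]), float(rec["amount"])))

conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)   # → 17.75
```

The same extract/transform/load shape scales up when the StringIO source becomes an API or scraper and SQLite becomes BigQuery, Hive, or SQL Server.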
You should be
- Expert in structured and unstructured data in traditional and Big Data environments – Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
- Have excellent knowledge of Python programming in both traditional and distributed models (PySpark)
- Expert in shell scripting and writing schedulers
- Hands-on experience with Cloud - deploying complex data solutions in hybrid cloud / on-premise environment both for data extraction/storage and computation
- Hands-on experience in deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
- Strong knowledge of data security best practices
- 5+ years of experience in a data engineering role
- Science / Engineering graduate from a Tier-1 university in the country
- And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep
Required Experience
· 3+ years of relevant technical experience in a data analyst role
· Intermediate to expert skills with SQL and basic statistics
· Experience in advanced SQL
· Python programming- Added advantage
· Strong problem solving and structuring skills
· Experience automating connections to various data sources and representing the data through dashboards
· Excellent with numbers; able to communicate data points through various reports/templates
· Ability to communicate effectively internally and outside Data Analytics team
· Proactively take up work responsibilities and handle ad-hoc requests as needed
· Ability and desire to take ownership of and initiative for analysis; from requirements clarification to deliverable
· Strong technical communication skills; both written and verbal
· Ability to understand and articulate the "big picture" and simplify complex ideas
· Ability to identify and learn applicable new techniques independently as needed
· Must have worked with various Databases (Relational and Non-Relational) and ETL processes
· Must have experience handling large volumes of data while adhering to optimization and performance standards
· Should have the ability to analyse the data and present relationship views of it from different angles
· Must have excellent Communication skills (written and oral).
· Knowing Data Science is an added advantage
Required Skills
MySQL, Advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills
Preferred Experience
· Strong understanding of relational databases such as MySQL
· Prior experience working remotely full-time
· Prior experience working in advanced SQL
· Experience with one or more BI tools, such as Superset, Tableau etc.
· High level of logical and mathematical ability in Problem Solving
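"Advanced SQL" in listings like the one above usually means window functions, CTEs, and the like. As a hedged illustration (the table and data are invented, and SQLite is used only so the snippet runs anywhere), ranking rows within groups with a window function looks like:

```python
# Sketch of an "advanced SQL" pattern: RANK() over partitions.
# Invented orders table; SQLite stands in for MySQL/SQL Server here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('north', 100), ('north', 300), ('south', 250), ('south', 50);
""")

query = """
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM orders
"""
# keep only the top order per region
top = sorted(row for row in conn.execute(query) if row[2] == 1)
print(top)   # → [('north', 300.0, 1), ('south', 250.0, 1)]
```

The same `RANK() OVER (PARTITION BY … ORDER BY …)` syntax works in MySQL 8+, SQL Server, and Postgres; only the connection code changes.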
Data Analyst
Job Description
Summary
Are you passionate about handling large & complex data problems, want to make an impact and have the desire to work on ground-breaking big data technologies? Then we are looking for you.
At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, tech-savvy, energetic team members who like to think outside the box.
Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP are a core part of our daily job.
Key Qualifications
- Experience in building highly cost optimised data analytics solutions
- Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data
- Experience (hands on) in building high quality ETL applications, data pipelines and analytics solutions ensuring data privacy and regulatory compliance.
- Experience in working with AWS or GCP
- Experience with relational and NoSQL databases
- Experience in full-stack web development (preferably Python)
- Expertise with data visualisation systems such as Tableau and Quick Sight
- Proficiency in writing advanced SQL queries, with expertise in performance tuning when handling large data volumes
- Familiarity with ML/AI technologies is a plus
- Demonstrate strong understanding of development processes and agile methodologies
- Strong analytical and communication skills. Should be self-driven, highly motivated, and able to learn quickly
Description
Data Analytics is at the core of our work, and you will have the opportunity to:
- Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc.
- Lead quick prototypes by integrating data from multiple sources
- Do advanced Business Analytics through ad-hoc SQL queries
- Work on Sales Finance reporting solutions using Tableau, HTML5, and React applications
We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.
Our work relies on a deep understanding of the company's needs and an ability to go through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. Key expectations of this role include data analytics, building data lakes, and end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.
Education & Experience
A bachelor’s/master’s degree in Computer Science with 5 to 7 years of experience and previous experience in data engineering is a plus.
- Proficient in SQL Server/T-SQL programming, creating and optimizing complex stored procedures, UDFs, CTEs, and triggers
- Overall experience should be between 4 and 7 years
- Experience working in a data warehouse environment and a strong understanding of dimensional data modeling concepts. Experience in SQL server, DW principles and SSIS.
- Should have strong experience in building data transformations with SSIS, including importing data from files and moving data from source to destination.
- Creating new SSIS packages or modifying existing SSIS packages using SQL Server.
- Debug and fine-tune SSIS processes to ensure accurate and efficient movement of data. Experience with ETL testing & data validation.
- 1+ years of experience with Azure services like Azure Data Factory, Data Flow, Azure Blob Storage, etc.
- 1+ years of experience developing Azure Data Factory objects - ADF pipelines, configuration, parameters, variables, and the integration runtime.
- Must be able to build Business Intelligence solutions in a collaborative, agile development environment.
- Reporting experience with Power BI or SSRS is a plus.
- Experience working on an Agile/Scrum team preferred.
- Proven strong problem-solving skills, troubleshooting, and root cause analysis.
- Excellent written and verbal communication skills.
Location - Remote till COVID (Hyderabad Stacknexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from 3rd party data sources by writing custom automated ETL jobs using Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.
- 2+ years of experience as an ETL developer, with strong data architecture knowledge around data warehousing concepts, SQL development and optimization, and operational support models.
- Experience using Python to automate ETL/data processing jobs.
- Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS / Azure / on-premises environments.
- Experience with, or willingness to learn, AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
- Develop and create transformation queries, views, and stored procedures for ETL processes, and process automation.
- Document data mappings, data dictionaries, processes, programs, and solutions as per established standards for data governance.
- Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points, such as validating control totals at intake and then upon transformation, and transparently build lessons learned into future data quality assessments.
- Solid experience with data modeling, business logic, and RESTful APIs.
- Solid experience in the Linux environment.
- Experience with NoSQL / PostgreSQL preferred
- Experience working with databases such as MySQL, NoSQL, and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
- Experience with NGINX and SSL.
- Performance tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
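The control-total validation mentioned in the responsibilities above can be sketched simply. This is an illustration under invented data, not the team's actual check: compare row counts and a summed column before and after a transform, and fail loudly on drift.

```python
# Hedged sketch of a control-total check: row count and column sum must match
# between intake and post-transform output. Data and tolerance are invented.

def control_totals(rows, amount_key):
    """Summarize a batch as (row count, summed amount) for reconciliation."""
    return {"rows": len(rows), "total": sum(r[amount_key] for r in rows)}

intake = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 5.0}, {"id": 3, "amt": 5.0}]
# transform: here just a pass-through rename; a real job might join or enrich
transformed = [{"order_id": r["id"], "amount": r["amt"]} for r in intake]

before = control_totals(intake, "amt")
after = control_totals(transformed, "amount")
assert before["rows"] == after["rows"], "row count drift between intake and output"
assert abs(before["total"] - after["total"]) < 1e-9, "amount total drift"
print("control totals match:", after)
```

For aggregating transforms the check changes shape (intake total vs. sum of aggregated totals), but the principle is the same: a cheap invariant caught at each hop localizes data loss to one pipeline stage.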
JD:
Required Skills:
- Intermediate- to expert-level hands-on programming in at least one language: Java, Python, PySpark, or Scala.
- Strong practical knowledge of SQL.
- Hands-on experience with Spark/Spark SQL.
- Data structures and algorithms.
- Hands-on experience as an individual contributor in Design, Development, Testing and Deployment of Big Data technologies based applications
- Experience in Big Data application tools, such as Hadoop, MapReduce, Spark, etc
- Experience with NoSQL databases like HBase, etc.
- Experience with Linux OS environment (Shell script, AWK, SED)
- Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a big RDBMS (100+ tables)
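The MapReduce experience asked for above follows one fixed shape regardless of framework. As a hedged sketch (plain Python with invented documents, standing in for a Hadoop or Spark job), the classic word count shows all three phases:

```python
# Minimal MapReduce-shaped word count: map emits (word, 1) pairs, shuffle
# groups them by key, reduce sums each group. Real jobs distribute these
# phases across a cluster; this just shows the structure of the idea.
from collections import defaultdict

docs = ["big data big ideas", "data pipelines"]

# map phase: each document emits (word, 1) pairs
mapped = [(word, 1) for doc in docs for word in doc.split()]

# shuffle phase: group emitted values by key
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# reduce phase: sum the values for each key
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts["big"], counts["data"])   # → 2 2
```

In Hadoop the map and reduce functions are supplied by the job while the framework handles the shuffle; in Spark the same computation collapses to a `flatMap` followed by `reduceByKey`.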