Responsibilities
- Interpret data, analyze results using statistical techniques and provide ongoing reports
- Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality
- Acquire data from primary or secondary data sources and maintain databases/data systems
- Identify, analyze, and interpret trends or patterns in complex data sets
- Filter and clean data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems
- Work with the teams to prioritize business and information needs
- Locate and define new process improvement opportunities
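The "filter and clean data" responsibility above can be sketched in a few lines. This is a minimal, illustrative pass (the field names `user_id` and `revenue` are invented), not a prescribed implementation:

```python
# Minimal data-cleaning sketch: drop incomplete rows, coerce types,
# and de-duplicate. Field names are hypothetical examples.

def clean_records(records):
    """Return rows with a present, unique user_id and a parseable revenue."""
    seen = set()
    cleaned = []
    for row in records:
        uid = row.get("user_id")
        if uid is None or uid in seen:
            continue  # skip incomplete or duplicate rows
        try:
            revenue = float(row.get("revenue", "nan"))
        except (TypeError, ValueError):
            continue  # skip rows whose revenue cannot be parsed
        if revenue != revenue:  # NaN check
            continue
        seen.add(uid)
        cleaned.append({"user_id": uid, "revenue": revenue})
    return cleaned

raw = [
    {"user_id": 1, "revenue": "10.5"},
    {"user_id": 1, "revenue": "10.5"},   # duplicate
    {"user_id": None, "revenue": "3"},   # missing key
    {"user_id": 2, "revenue": "oops"},   # unparseable
]
print(clean_records(raw))  # → [{'user_id': 1, 'revenue': 10.5}]
```

In practice the same checks would be expressed with a library such as pandas, but the logic is the same.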
Requirements
- Minimum 3 years of working experience as a Data Analyst or Business Data Analyst
- Technical expertise with data models, database design development, data mining, and segmentation techniques
- Strong knowledge of and experience with reporting packages (Business Objects, etc.), databases (SQL, etc.), and programming (XML, JavaScript, or ETL frameworks)
- Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc)
- Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Excellent written and verbal communication skills for coordinating across teams.
- A drive to learn and master new technologies and techniques.
Data Analyst
Job Description
Summary
Are you passionate about solving large and complex data problems, eager to make an impact, and keen to work on ground-breaking big data technologies? Then we are looking for you.
At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, tech-savvy, energetic team members who like to think outside the box.
Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP are a core part of our daily job.
Key Qualifications
- Experience in building highly cost-optimised data analytics solutions
- Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data
- Experience (hands on) in building high quality ETL applications, data pipelines and analytics solutions ensuring data privacy and regulatory compliance.
- Experience in working with AWS or GCP
- Experience with relational and NoSQL databases
- Experience with full-stack web development (preferably Python)
- Expertise with data visualisation systems such as Tableau and QuickSight
- Proficiency in writing advanced SQL queries, with expertise in performance tuning and handling large data volumes
- Familiarity with ML/AI technologies is a plus
- Demonstrated strong understanding of development processes and agile methodologies
- Strong analytical and communication skills; self-driven, highly motivated, and able to learn quickly
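The SQL performance-tuning skill mentioned above boils down to reading query plans. This sketch uses the stdlib `sqlite3` module as a stand-in for a production warehouse (table and column names are invented) to show how an index changes the plan:

```python
# Illustrative only: compare the query plan before and after adding an
# index. The exact plan text varies by SQLite version.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, amount REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}", i * 0.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM events WHERE user_id = ?"

# Without an index, SQLite must scan the whole table.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchone()[-1]
print(plan_before)  # e.g. "SCAN events"

# A covering index on (user_id, amount) turns the scan into an index seek.
con.execute("CREATE INDEX idx_events_user ON events (user_id, amount)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchone()[-1]
print(plan_after)   # e.g. "SEARCH events USING COVERING INDEX ..."
```

The same habit, inspecting the plan before and after a schema change, carries over to Redshift, Athena, or any other engine.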
Description
Data Analytics is at the core of our work, and you will have the opportunity to:
- Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc.
- Lead quick prototypes by integrating data from multiple sources
- Do advanced Business Analytics through ad-hoc SQL queries
- Work on Sales Finance reporting solutions using Tableau, HTML5, and React applications
We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.
Our work relies on a deep understanding of the company's needs and an ability to work through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. Key expectations of this role include data analytics, building data lakes, and delivering end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.
Education & Experience
A bachelor’s or master’s degree in Computer Science with 5 to 7 years of experience; previous experience in data engineering is a plus.
Job responsibilities
- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
- You will collaborate with Data Scientists in order to design scalable implementations of their models
- You will pair to write clean and iterative code based on TDD
- Leverage various continuous delivery practices to deploy, support and operate data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
- Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
Professional skills
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
- An interest in coaching, sharing your experience and knowledge with teammates
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
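The "complex data processing pipelines" above are, at their core, composable and individually testable stages; that is also what clean, TDD-driven pipeline code looks like. This pure-Python generator sketch (all names invented) stands in for distributed stages such as Spark transforms:

```python
# Toy extract -> transform -> load pipeline built from generators.
# Each stage is a plain function, so each can be unit-tested alone.

def extract(lines):
    """Parse raw "name, value" lines into (name, value) string pairs."""
    for line in lines:
        name, _, value = line.partition(",")
        yield name.strip(), value.strip()

def transform(rows):
    """Keep numeric rows and convert values to int."""
    for name, value in rows:
        if value.isdigit():
            yield name, int(value)

def load(rows):
    """Aggregate into a dict, the stand-in for a warehouse table."""
    table = {}
    for name, value in rows:
        table[name] = table.get(name, 0) + value
    return table

raw = ["a, 1", "b, oops", "a, 2", "c, 5"]
print(load(transform(extract(raw))))  # → {'a': 3, 'c': 5}
```

Because the stages are lazy generators, data streams through one record at a time, the same shape a Kafka-to-Spark pipeline has at much larger scale.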
Work closely with Product Managers to drive product improvements through data driven decisions.
Conduct analysis to determine new project pilot settings, new features, user behaviour, and in-app behaviour.
Present insights and recommendations to leadership using high quality visualizations and concise messaging.
Own the implementation of data collection and tracking, and coordinate with the engineering and product teams.
Create and maintain dashboards for product and business teams.
Requirements
1+ years’ experience in analytics; experience as a product analyst is an added advantage.
Technical skills: SQL, Advanced Excel
Good to have: R/Python, Dashboarding experience
Ability to translate structured and unstructured problems into an analytical framework
Excellent analytical skills
Good communication & interpersonal skills
Ability to work in a fast-paced start-up environment, learn on the job and get things done.
Must Have Skills:
- Solid Knowledge on DWH, ETL and Big Data Concepts
- Excellent SQL Skills (With knowledge of SQL Analytics Functions)
- Working experience with any ETL tool, e.g. SSIS / Informatica
- Working Experience on any Azure or AWS Big Data Tools.
- Experience on Implementing Data Jobs (Batch / Real time Streaming)
- Excellent written and verbal communication skills in English; self-motivated with a strong sense of ownership; ready to learn new tools and technologies
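The "SQL Analytics Functions" named above are window functions. A minimal illustration using stdlib `sqlite3` (window functions need SQLite 3.25+; the table and column names are made up) computes a per-customer running total:

```python
# Illustrative window-function query: a running total per customer.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, day INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("alice", 1, 10.0), ("alice", 2, 5.0), ("bob", 1, 7.0),
])

rows = con.execute("""
    SELECT customer, day,
           SUM(amount) OVER (PARTITION BY customer ORDER BY day)
               AS running_total
    FROM orders
    ORDER BY customer, day
""").fetchall()
print(rows)  # → [('alice', 1, 10.0), ('alice', 2, 15.0), ('bob', 1, 7.0)]
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern works in SQL Server, Redshift, BigQuery, and most other engines.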
Preferred Skills:
- Experience with PySpark / Spark SQL
- AWS Data Tools (AWS Glue, AWS Athena)
- Azure Data Tools (Azure Databricks, Azure Data Factory)
Other Skills:
- Knowledge about Azure Blob, Azure File Storage, AWS S3, Elastic Search / Redis Search
- Knowledge of the domain/function (across pricing, promotions, and assortment).
- Implementation experience with schema and data-validator frameworks (Python / Java / SQL).
- Knowledge of DQS and MDM.
Key Responsibilities:
- Independently work on ETL / DWH / Big data Projects
- Gather and process raw data at scale.
- Design and develop data applications using selected tools and frameworks as required and requested.
- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.
- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc.
- Work closely with the engineering team to integrate your work into our production systems.
- Process unstructured data into a form suitable for analysis.
- Analyse processed data.
- Support business decisions with ad hoc analysis as needed.
- Monitor data performance and modify infrastructure as needed.
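"Process unstructured data into a form suitable for analysis" often starts with nothing more than a regular expression. This sketch assumes log lines in a made-up "LEVEL: message (Nms)" format; the format and names are illustrative only:

```python
# Turn semi-structured log text into typed records for analysis.
import re

LINE = re.compile(r"(?P<level>\w+): (?P<msg>.+) \((?P<ms>\d+)ms\)")

def parse(lines):
    """Yield (level, message, millis) for each line that matches."""
    for line in lines:
        m = LINE.match(line)
        if m:
            yield m.group("level"), m.group("msg"), int(m.group("ms"))

logs = [
    "INFO: cache warmed (12ms)",
    "garbage line",                    # silently skipped
    "ERROR: upstream timeout (504ms)",
]
records = list(parse(logs))
print(records)
# → [('INFO', 'cache warmed', 12), ('ERROR', 'upstream timeout', 504)]
```

Once lines are typed tuples, the ad hoc analysis steps above (aggregation, filtering, SQL loading) become straightforward.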
Responsibility: A smart resource with excellent communication skills
Job Description
- Design, development, and deployment of highly available and fault-tolerant enterprise business software at scale.
- Demonstrate technical expertise to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: provide technical leadership in architecting and building product solutions.
- Collaborate across teams to deliver a result, from hardworking team members within your group to smart technologists across lines of business.
- Be a role model in acting with good judgment and responsibility, helping teams to commit and move forward.
- Be a humble mentor and trusted advisor for both our talented team members and passionate leaders alike. Deal with differences of opinion in a mature and fair way.
- Raise the bar by improving standard methodologies, producing best-in-class efficient solutions, code, documentation, testing, and monitoring.
Qualifications
- 15+ years of relevant engineering experience.
- Proven record of building and productionizing highly reliable products at scale.
- Experience with Java and Python.
- Experience with big data technologies is a plus.
- Ability to assess new technologies and make pragmatic choices that help guide us towards a long-term vision.
- Can collaborate well with several other engineering orgs to articulate requirements and system design.
Additional Information
Professional Attributes:
• Team player!
• Great interpersonal skills, deep technical ability, and a portfolio of successful execution.
• Excellent written and verbal communication skills, including the ability to write detailed technical documents.
• Passionate about helping teams grow by inspiring and mentoring engineers.
About antuit.ai
Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.
Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.
The Role:
Antuit is looking for a Data Scientist / Sr. Data Scientist with knowledge and experience in developing machine learning algorithms, particularly in the supply chain and forecasting domain, using data science toolkits like Python.
In this role, you will design the approach, develop and test machine learning algorithms, and implement the solution. The candidate should have excellent communication skills and be results-driven, with a customer-centric approach to problem solving. Experience working in the demand forecasting or supply chain domain is a plus. This job also requires the ability to operate in a multi-geographic delivery environment and a good understanding of cross-cultural sensitivities.
Responsibilities:
Responsibilities include, but are not limited to, the following:
- Design, build, test, and implement predictive Machine Learning models.
- Collaborate with client to align business requirements with data science systems and process solutions that ensure client’s overall objectives are met.
- Create meaningful presentations and analysis that tell a “story” focused on insights, to communicate the results/ideas to key decision makers.
- Collaborate cross-functionally with domain experts to identify gaps and structural problems.
- Contribute to standard business processes and practices as part of a community of practice.
- Be the subject matter expert across multiple work streams and clients.
- Mentor and coach team members.
- Set a clear vision for the team members and work cohesively to attain it.
Qualifications and Skills:
Requirements
- Experience / Education:
- Master’s or Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Mathematics, or another related field
- 5+ years’ experience working in applied machine learning or relevant research experience for recent Ph.D. graduates.
- Highly technical:
- Skilled in machine learning, problem-solving, pattern recognition and predictive modeling with expertise in PySpark and Python.
- Understanding of data structures and data modeling.
- Effective communication and presentation skills
- Able to collaborate closely and effectively with teams.
- Experience in time series forecasting is preferred.
- Experience working in start-up type environment preferred.
- Experience in CPG and/or Retail preferred.
- Strong management track record.
- Strong inter-personal skills and leadership qualities.
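Time-series forecasting is called out as a preferred skill above. As a deliberately simple illustration of the task (a baseline, not a production model; the sales numbers are made up), here is a trailing moving-average forecast:

```python
# A naive demand-forecast baseline: the mean of the last `window`
# observations, fed back in to forecast several steps ahead.

def moving_average_forecast(history, window=3, horizon=2):
    """Forecast `horizon` future points from a trailing moving average."""
    series = list(history)
    forecasts = []
    for _ in range(horizon):
        point = sum(series[-window:]) / window
        forecasts.append(point)
        series.append(point)  # use the forecast as the next observation
    return forecasts

weekly_sales = [100, 110, 120, 130, 140, 150]
print(moving_average_forecast(weekly_sales))  # first forecast is 140.0
```

Real demand-forecasting work would replace this baseline with proper time-series or ML models, but any such model is typically benchmarked against exactly this kind of baseline.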
Information Security Responsibilities
- Understand and adhere to Information Security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
- Take part in Information Security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).
EEOC
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
- Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
- Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
- Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
- Aware of database development for both on-premise and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
- PolyBase queries for exporting and importing data into Azure Data Lake.
- Building data models both tabular and multidimensional using SQL Server data tools.
- Writing data preparation, cleaning and processing steps using Python, SCALA, and R.
- Programming experience using python libraries NumPy, Pandas and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premise SQL server database to Microsoft Azure.
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries, hands on experience using SQL Extended events.
- Data modeling using Power BI for ad hoc reporting.
- Raw data load automation using T-SQL and SSIS
- Expert in migrating existing on-premise database to SQL Azure.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server
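The linear-regression modeling mentioned in the list above can be shown from scratch with the closed-form least-squares solution. This is purely illustrative (the data is invented); production work would use a library such as scikit-learn:

```python
# Simple one-feature linear regression via closed-form least squares.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]            # exactly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)      # → 2.0 1.0
```

Logistic regression, decision trees, and random forests follow the same fit-then-predict shape, just with different loss functions and model structures.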
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts and productionizing of code (SQL, Pandas, Python or PySpark, etc.)
Required skills:
Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
Fluency in Python (Pandas), PySpark, SQL, or similar
Azure data factory experience (min 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process.
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently with demonstrated experience in project or program management
Azure experience; ability to translate data scientists' Python code and make it efficient for production cloud deployment
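The "performance optimization of scripts" requirement above usually comes down to finding an accidental algorithmic cost. A common real-world example, sketched here with made-up data, is replacing an O(n·m) list-membership scan with an O(1) set lookup:

```python
# Illustrative performance fix: list membership vs. set membership.
# The same idea (pre-building a lookup) carries over to Pandas merges
# and PySpark broadcast joins.
import time

valid_ids = list(range(2_000))
events = list(range(0, 4_000, 2))

def slow(events, valid):
    return [e for e in events if e in valid]   # scans the list per event

def fast(events, valid):
    valid_set = set(valid)                     # build the lookup once
    return [e for e in events if e in valid_set]

t0 = time.perf_counter(); a = slow(events, valid_ids)
t1 = time.perf_counter(); b = fast(events, valid_ids)
t2 = time.perf_counter()

assert a == b  # same result, very different cost
print(f"slow: {t1 - t0:.4f}s  fast: {t2 - t1:.4f}s")
```

End-to-end performance tracing (profiling, reading query plans, timing stages) is about locating exactly this kind of hotspot before rewriting anything.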