· Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy
· Prior experience with statistical modelling techniques, AI/ML models, etc. will be an added advantage
· Working knowledge of reporting packages (Business Objects, Qlik, Power BI, etc.) and ETL frameworks will be an advantage
· Knowledge of statistics and experience using statistical packages for analysing datasets (MS Excel, SPSS, SAS, etc.)
· Experience with Python, R, and other scripting languages is desirable but not mandatory (a short profiling sketch follows this posting)
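As a rough illustration of the scripting and dataset-analysis skills listed above, here is a minimal Python/pandas sketch of profiling a dataset for accuracy and completeness; the file name and columns are hypothetical placeholders, not part of the role description.

```python
# Minimal sketch of the kind of dataset profiling this role involves.
# "sessions.csv" and its columns are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("sessions.csv")  # hypothetical input file

# Basic descriptive statistics: count, mean, std, quartiles per numeric column
print(df.describe())

# Spot-check data quality: missing values per column and duplicate rows
print(df.isna().sum())
print(f"duplicate rows: {df.duplicated().sum()}")
```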
We’re looking for an experienced Solution Architect to lead our data analytics and visualization team in Gurugram and help generate valuable insights for our customers as well as the product team. This is a very hands-on role: you will be responsible for all aspects of the architecture, security, and scalability of the analytics and visualization application.

Key Responsibilities:
· Own the application architecture, performance, security, scalability, and availability.
· Write/review code every day, pair with team members on functional and non-functional requirements, and spread design philosophy, goals, and code-quality improvements across the team.
· Translate objectives into iterative MVPs, then evaluate and refactor them into highly scalable, highly available, reliable, secure, and fault-tolerant systems.
· Build and manage automated build/test/deployment environments.
· Identify and confirm technical design risks, develop mitigating approaches, judge the tradeoffs between technology and feasibility, and make choices that fit the constraints of the project.
· Exercise comprehensive decision-making authority over technical issues, project policies, standards, and strategies.
· Deep dive into the data to help our customers understand how their users are using the app and how they can improve adoption.
· Understand how users engage with the product by analyzing their digital footprints step by step to see what leads them to engage, return, or churn. The role enables product owners, designers, and developers to use the right data to guide their decisions.
· Provide data-driven insights and deliver recommendations that address opportunities for product improvements.
· Help improve the accuracy and efficiency of the data capture strategy.
· A/B testing: analyze, consolidate, and present test results; uncover friction points to continuously iterate on and improve products and features (a minimal analysis sketch follows this posting).

Desired Skills and Experience:
· 7+ years of overall experience, with a minimum of 2 years of hands-on data analysis/visualization work.
· Experience in at least two of NodeJS, Python, and Java, and a willingness to learn others.
· Data modelling experience in both relational and NoSQL databases.
· Experience working with SQL and NoSQL databases, data visualization technologies, and cloud infrastructure platforms.
· Proven experience with large-scale data analysis, data mining, business intelligence systems, and other advanced business reporting tools, including SQL or similar tools.
· Knowledge of data conversion strategy, data capture, creating source-to-target definitions for the ETL/data flow process, data quality, and database management.
· Develop and implement databases, data collection systems, data analytics, and other strategies.
· Identify, analyze, and interpret trends or patterns in complex data sets.
· Work with large-scale data sets: extract data from sources across multiple platforms and synthesize it into dashboards or insightful information.
· Proven deep technical expertise and hands-on involvement in designing, architecting, and executing large-scale cloud solutions for data analytics and complementary services like compute, workflow, and serverless.
· Strong experience with Agile methodology and the ability to deliver in a global team environment with members working remotely across time zones.
· Excellent communication skills are a must across all roles at Simpplr.
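To make the A/B testing responsibility concrete, here is a minimal sketch of consolidating test results with a two-proportion z-test in Python; the counts, and the choice of statsmodels, are illustrative assumptions rather than the team's actual method.

```python
# Minimal sketch of consolidating A/B test results.
# Counts are hypothetical; in practice they would come from the analytics store.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 471]   # converted users in control (A) and variant (B)
exposures = [5014, 4987]   # users exposed to each variant

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"A: {conversions[0]/exposures[0]:.2%}  B: {conversions[1]/exposures[1]:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # small p suggests a real difference
```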
Should have experience in data engineering with knowledge of SQL, ETL, Spark, and data pipelines (a minimal Spark ETL sketch follows). Should have worked in a product-based company.
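A minimal PySpark batch ETL sketch of the kind of work implied above; the paths, column names, and filter logic are hypothetical.

```python
# Minimal PySpark batch ETL sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV exported from the source system
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Transform: keep completed orders and derive a revenue column
completed = (orders
             .filter(F.col("status") == "COMPLETED")
             .withColumn("revenue", F.col("quantity") * F.col("unit_price")))

# Load: write partitioned Parquet for downstream SQL consumers
completed.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")
```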
Responsible for planning, connecting, designing, scheduling, and deploying data warehouse systems. Develops, monitors, and maintains ETL processes, reporting applications, and data warehouse design.

Role and Responsibility
· Plan, create, coordinate, and deploy data warehouses.
· Design the end-user interface.
· Create best practices for data loading and extraction.
· Develop data architecture, data modeling, and ETL mapping solutions within a structured data warehouse environment (see the dimension-load sketch after this posting).
· Develop reporting applications and maintain data warehouse consistency.
· Facilitate requirements gathering using expert listening skills and develop unique, simple solutions that meet the immediate and long-term needs of business customers.
· Supervise the design throughout the implementation process.
· Design and build cubes and develop custom scripts.
· Develop and implement ETL routines according to the DWH design and architecture.
· Support the development and validation required through the lifecycle of the DWH and Business Intelligence systems, maintain user connectivity, and provide adequate security for the data warehouse.
· Monitor the DWH and BI systems' performance and integrity; provide corrective and preventive maintenance as required.
· Manage multiple projects at once.

DESIRABLE SKILL SET
· Experience with technologies such as MySQL, MongoDB, and SQL Server 2008, as well as SSIS and stored procedures
· Exceptional experience developing code, testing for quality assurance, administering RDBMSs, and monitoring databases
· High proficiency in dimensional modeling techniques and their applications
· Strong analytical, consultative, and communication skills, as well as the ability to exercise good judgment and work with both technical and business personnel
· Several years of working experience with Tableau, MicroStrategy, Information Builders, and other reporting and analytical tools
· Working knowledge of SAS and R code used in data processing and modeling tasks
· Strong experience with Hadoop, Impala, Pig, Hive, YARN, and other "big data" technologies such as Amazon Redshift or Google BigQuery
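As a hedged illustration of developing ETL routines and source-to-target mappings within a dimensional model, the sketch below upserts rows into a customer dimension of a star schema; sqlite3 stands in for the warehouse engine, and all table and column names are hypothetical.

```python
# Sketch of an ETL routine loading a customer dimension in a star schema,
# illustrating a source-to-target mapping. Table/column names are hypothetical
# and sqlite3 stands in for the actual warehouse engine.
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key INTEGER PRIMARY KEY,
        source_id    TEXT UNIQUE,   -- natural key from the source system
        name         TEXT,
        city         TEXT
    )
""")

# Source rows as extracted upstream (normally read from a staging area)
source_rows = [("C001", "Asha Rao", "Gurugram"), ("C002", "Li Wei", "Pune")]

# Source-to-target mapping: upsert on the natural key so reruns are idempotent
conn.executemany("""
    INSERT INTO dim_customer (source_id, name, city) VALUES (?, ?, ?)
    ON CONFLICT(source_id) DO UPDATE SET name = excluded.name, city = excluded.city
""", source_rows)
conn.commit()
```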
Job Responsibilities:
- Develop new data pipelines and ETL jobs for processing millions of records; they should scale with growth. Pipelines should be optimised to handle real-time, batch-update, and historical data.
- Establish scalable, efficient, automated processes for complex, large-scale data analysis.
- Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL, and store the results in a data warehouse.
- Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
- Participate in data pipeline health monitoring and performance optimisation as well as quality documentation (a monitoring sketch follows this posting).
- Interact with end users/clients and translate business language into technical requirements.
- Act independently to expose and resolve problems.

Job Requirements:
- 2+ years of experience in software development and data pipeline development for enterprise analytics.
- 2+ years of working with Python, with exposure to various warehousing tools.
- In-depth work with any of the commercial tools such as AWS Glue, Talend, Informatica, DataStage, etc.
- Experience with various relational databases such as MySQL, MS SQL Server, Oracle, etc. is a must.
- Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
- Experience with various DevOps practices, helping the client deploy and scale systems as required.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the Logistics and/or Transportation domain is a plus.
- Hands-on experience with traditional databases and ERP systems such as Sybase and PeopleSoft.
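For the pipeline health-monitoring point above, here is a minimal Python sketch that instruments an ETL step with duration and row-count logging; the step name and placeholder extract are hypothetical, not a prescribed implementation.

```python
# Minimal sketch of instrumenting ETL steps for pipeline health monitoring.
# The step and its placeholder data are hypothetical.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def monitored_step(func):
    """Log duration and row count for each pipeline step."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        rows = func(*args, **kwargs)
        elapsed = time.monotonic() - start
        log.info("%s: %d rows in %.2fs", func.__name__, len(rows), elapsed)
        return rows
    return wrapper

@monitored_step
def extract_orders():
    # Placeholder extract; a real job would query the source database
    return [{"order_id": i} for i in range(1000)]

extract_orders()
```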
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer.
· Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
· Experience in architecting data ingestion, storage, and consumption models (a streaming-ingestion sketch follows this posting).
· Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
· Knowledge of various ETL tools and techniques
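As an illustrative sketch of the ingestion-architecture skills listed above, the snippet below consumes a Kafka topic with Spark Structured Streaming and lands the payload as Parquet; the broker address, topic, and paths are assumptions, and the Kafka connector package must be on the Spark classpath.

```python
# Sketch of real-time ingestion: Kafka topic -> Spark Structured Streaming
# -> Parquet. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "app-events")
          .load()
          .select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp")))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/raw/app_events")
         .option("checkpointLocation", "/checkpoints/app_events")
         .start())
query.awaitTermination()
```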