Should be able to use transformation components to transform the data
Should possess knowledge of incremental loads, full loads, etc. (see the sketch after this list)
Should design, build, and deploy effective packages
Should be able to schedule these packages through task schedulers
Implement stored procedures and effectively query a database
Translate requirements from business users and analysts into technical code
Identify and test for bugs and bottlenecks in the ETL solution
Ensure the best possible performance and quality in the packages
Provide support and fix issues in the packages
Write advanced SQL, including some query tuning
Experience in the identification of data quality issues
Some database design experience is helpful
Experience designing and building complete ETL/SSIS processes that move and transform data for ODS, staging, and data warehousing
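For illustration, here is a minimal sketch of the full-load vs. incremental-load distinction, in Python with SQLite standing in for the source and target; the table and column names (source_orders, target_orders, updated_at) and the watermark value are assumptions, not part of any actual package:

```python
import sqlite3

# Minimal sketch of full load vs. incremental load. SQLite stands in
# for the real source/target; an SSIS package would express the same
# logic with a Data Flow task and a watermark variable.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    CREATE TABLE target_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    INSERT INTO source_orders VALUES (1, 10.0, '2024-01-01'), (2, 20.0, '2024-01-02');
""")

def full_load(conn):
    """Truncate-and-reload: simple, but rereads the entire source."""
    conn.execute("DELETE FROM target_orders")
    conn.execute("INSERT INTO target_orders SELECT * FROM source_orders")

def incremental_load(conn, watermark):
    """Copy only rows changed since the last successful load."""
    conn.execute(
        "INSERT OR REPLACE INTO target_orders "
        "SELECT * FROM source_orders WHERE updated_at > ?",
        (watermark,),
    )

full_load(conn)
incremental_load(conn, watermark="2024-01-01")
print(conn.execute("SELECT * FROM target_orders").fetchall())
```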
- Own the product analytics of Bidgely’s end-user-facing products; measure and identify areas of improvement through data
- Liaise with Product Managers and Business Leaders to understand product issues and priorities, and support them through relevant product analytics
- Own the automation of product analytics, applying strong SQL knowledge
- Develop early warning metrics for production and highlight issues and breakdowns for resolution (see the sketch after this list)
- Resolve client escalations and concerns regarding key business metrics
- Define and own execution
- Own Energy Efficiency program design, dashboard development, and monitoring of existing Energy Efficiency programs
- Deliver data-backed analysis and statistically proven solutions
- Research and implement best practices
- Mentor team of analysts
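As a hedged illustration of an early-warning metric, the sketch below flags days where a daily metric drifts more than three standard deviations from its trailing seven-day window; the metric name, data, and threshold are illustrative assumptions, not Bidgely's actual monitoring logic:

```python
import pandas as pd

# Flag days whose value deviates > 3 rolling standard deviations from
# the trailing 7-day mean (excluding the day itself, via shift(1)).
daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=10, freq="D"),
    "active_users": [100, 102, 98, 101, 99, 103, 100, 55, 102, 100],
})

trailing = daily["active_users"].shift(1).rolling(7, min_periods=7)
daily["zscore"] = (daily["active_users"] - trailing.mean()) / trailing.std()
alerts = daily[daily["zscore"].abs() > 3]
print(alerts[["date", "active_users", "zscore"]])
```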
Qualifications and Education Requirements
- B.Tech from a premier institute with 5+ years of analytics experience, or a full-time MBA from a premier B-school with 3+ years of experience in analytics/business or product analytics
- Bachelor's degree in Business, Computer Science, Computer Information Systems, Engineering, Mathematics, or other business/analytical disciplines
Skills needed to excel
- Proven analytical and quantitative skills and an ability to use data and metrics to back up assumptions, develop business cases, and complete root cause analyses
- Excellent understanding of retention, churn, and acquisition of the user base
- Ability to employ statistics and anomaly detection techniques for data-driven analytics
- Ability to put yourself in the shoes of the end customer and understand what “product excellence” means
- Ability to rethink existing products and use analytics to identify new features and product improvements.
- Ability to rethink existing processes and design new processes for more effective analyses
- Strong SQL knowledge; working experience with Looker and Tableau is a great plus
- Strong commitment to quality visible in the thoroughness of analysis and techniques employed
- Strong project management and leadership skills
- Excellent communication (oral and written) and interpersonal skills and an ability to effectively communicate with both business and technical teams
- Ability to coach and mentor analysts on technical and analytical skills
- Good knowledge of statistics, basic machine learning, and A/B testing is preferable (see the sketch after this list)
- Experience as a growth hacker and/or in product analytics is a big plus
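As a hedged example of the A/B-testing knowledge asked for above, here is a minimal two-proportion z-test in Python; the conversion counts are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Two-proportion z-test: did variant B convert better than variant A?
def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Illustrative counts only.
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```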
Who we are:
Stanza Living is India’s largest and fastest growing tech-enabled, managed accommodation company that delivers a hospitality-led living experience to migrant students and young working professionals across India.
We have a full-stack business model that focuses on the design, development, and delivery of daily living solutions tailored to young consumers’ lifestyles. From smartly planned residences and a host of amenities and services for hassle-free living to exclusive community engagement programmes – everything is seamlessly integrated through technology to ensure the highest consumer delight.
Today, we are:
• India’s largest managed accommodation company with over 50,000 beds under management across 24+ cities
• Most capitalized player in the managed accommodation space, backed by global marquee investors – Falcon Edge, Equity International, Sequoia Capital, Matrix Partners, Accel Partners
• Recognized as the best real-estate tech company globally in 2020 by leading analyst firm Tracxn
• LinkedIn Top Startup to Work for - 2022
The opportunity:
Job Responsibilities:
• Perform data analysis on large volumes of data to identify trends and/or data processing rules
• Be a team player within the core analytics team.
• Responsible for weekly and monthly sales/marketing reports on a gross and net basis, and other ad-hoc reports.
• Generate reports on a daily basis at all stages.
• Analyze the data to surface insights on what leads to better conversions, student preferences, the role of various investments and channels, spend optimization, etc. (see the sketch after this list)
• Prepare reports and dashboards for various business functions to keep track of important business metrics.
• Elicit and document requirements at various levels including Business, Logical and Physical/Technical
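For illustration, a minimal pandas sketch of the kind of weekly gross/net report described above; the column names (booking_date, gross_amount, refund_amount) are assumptions:

```python
import pandas as pd

# Aggregate bookings to gross and net revenue per week.
bookings = pd.DataFrame({
    "booking_date": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-09"]),
    "gross_amount": [12000, 8000, 15000],
    "refund_amount": [0, 2000, 0],
})
bookings["net_amount"] = bookings["gross_amount"] - bookings["refund_amount"]

weekly = (
    bookings
    .set_index("booking_date")
    .resample("W")[["gross_amount", "net_amount"]]
    .sum()
)
print(weekly)
```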
Skill Sets
• Good hands-on skills in advanced Excel and SQL.
• Has worked extensively on live dashboards, reporting, data manipulation, and making flat tables in SQL (see the sketch after this list)
• Knowledge of Python/R
• Strong analytical skills and ability to interpret data
• Natural curiosity and self-drive to understand the broader business in order to provide the appropriate reporting support
• Extremely high ownership; a self-starter who thrives in a constantly changing, fast-growing environment
• Establish collaborative and trusting relationships with the business’s key internal leaders and stakeholders in order to ensure that there is a free flow of ideas and information across the business
• First-principles thinking and strong problem-solving
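As a hedged sketch of “making flat tables in SQL”, the example below denormalizes two tables into one wide reporting table; SQLite stands in for the production database, and all schema names are illustrative:

```python
import sqlite3

# Build a "flat" (denormalized) table so dashboards can read one wide
# row per booking without re-running joins.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE properties (id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE bookings (id INTEGER PRIMARY KEY, property_id INTEGER, amount REAL);
    INSERT INTO properties VALUES (1, 'Delhi'), (2, 'Pune');
    INSERT INTO bookings VALUES (10, 1, 9000), (11, 2, 7000);

    -- The flat table: joins pre-resolved at build time.
    CREATE TABLE flat_bookings AS
    SELECT b.id AS booking_id, p.city, b.amount
    FROM bookings b
    JOIN properties p ON p.id = b.property_id;
""")
print(conn.execute("SELECT * FROM flat_bookings").fetchall())
```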
What Can You Expect:
• A phenomenal work environment, with extremely high ownership and growth opportunities
• Opportunity to shape a potential unicorn
• Quick iterations and deployments - fail-fast attitude
• Opportunity to work on cutting-edge technologies
• Access to a world-class mentorship network
Experience: 6-9 Years
Location: Pan India
Job Description
Assist in the establishment of internal controls related to software asset management, governance, and compliance
Handle cloud – AWS, Amazon, GCP – and billing
Participate in license compliance audits
Track and manage software license governance and compliance in accordance with enterprise policy, processes, procedures, and controls, by internal staff and external service providers
- Analyze and organize raw data
- Build data systems and pipelines
- Evaluate business needs and objectives
- Interpret trends and patterns
- Conduct complex data analysis and report on results
- Build algorithms and prototypes
- Combine raw information from different sources (see the sketch after this list)
- Explore ways to enhance data quality and reliability
- Identify opportunities for data acquisition
- Should have experience in Python and Django microservices; senior developer with a Financial Services/Investment Banking background
- Develop analytical tools and programs
- Collaborate with data scientists and architects on several projects
- Should have 5+ years of experience as a data engineer or in a similar role
- Technical expertise with data models, data mining, and segmentation techniques
- Should have experience with programming languages such as Python
- Hands-on experience with SQL database design
- Great numerical and analytical skills
- Degree in Computer Science, IT, or similar field; a Master’s is a plus
- Data engineering certification (e.g. IBM Certified Data Engineer) is a plus
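For illustration, a minimal sketch of combining raw information from different sources: a CSV extract merged with a JSON payload on a shared key. The sources and field names are assumptions:

```python
import io
import json
import pandas as pd

# Two raw sources: a CSV extract and a JSON API payload, merged on a
# shared key into one combined frame.
csv_extract = io.StringIO("customer_id,region\n1,EU\n2,US\n")
api_payload = json.loads('[{"customer_id": 1, "spend": 250}, {"customer_id": 2, "spend": 400}]')

customers = pd.read_csv(csv_extract)
spend = pd.DataFrame(api_payload)
combined = customers.merge(spend, on="customer_id", how="left")
print(combined)
```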
- Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
- Experience in migrating on-premise data warehouses to data platforms on AZURE cloud.
- Designing and implementing data engineering, ingestion, and transformation functions (see the sketch after this list)
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available in HDInsight and Databricks
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions like Attunity/StreamSets and Informatica
- Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
- Capacity Planning and Performance Tuning on Azure Stack and Spark.
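As a hedged sketch of an ingestion-and-transformation step on Azure, the PySpark job below reads raw Parquet from ADLS Gen2, aggregates, and writes a curated output; the storage account, container paths, and column names are placeholders, and the cluster is assumed to already hold ADLS credentials (as a Databricks or HDInsight cluster typically would):

```python
from pyspark.sql import SparkSession, functions as F

# Read raw events from ADLS Gen2, roll them up by day and type, and
# write a curated table back. Paths and columns are placeholders.
spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

raw = spark.read.parquet("abfss://raw@yourlake.dfs.core.windows.net/events/")
curated = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .count()
)
curated.write.mode("overwrite").parquet(
    "abfss://curated@yourlake.dfs.core.windows.net/daily_event_counts/"
)
```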
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Author data services using a variety of programming languages
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centres and Azure regions (see the sketch after this list).
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Work in an Agile environment with Scrum teams.
- Ensure data quality and help in achieving data governance.
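As a hedged sketch of keeping data separated across national boundaries, the example below routes each record to the store for its home region and fails closed on anything unmapped; the region codes and in-memory stores are stand-ins for per-region Azure storage accounts:

```python
# Route each record to its home region's store; never mix regions.
REGION_STORES = {
    "IN": [],   # stand-in for an India-region store
    "EU": [],   # stand-in for an EU-region store
}

def write_record(record: dict) -> None:
    region = record.get("data_region")
    if region not in REGION_STORES:
        # Fail closed: never let a record land in the wrong jurisdiction.
        raise ValueError(f"No store configured for region {region!r}")
    REGION_STORES[region].append(record)

write_record({"user_id": 1, "data_region": "IN"})
write_record({"user_id": 2, "data_region": "EU"})
print({k: len(v) for k, v in REGION_STORES.items()})
```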
Basic Qualifications
- 2+ years of experience in a Data Engineer role
- Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
- Experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases
- Experience with data pipeline and workflow management tools
- Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (see the sketch after this list)
- Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
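For illustration, a minimal Spark Structured Streaming sketch of the stream-processing experience listed above: read a Kafka topic and count events per key. The broker address and topic are placeholders, and running it requires the spark-sql-kafka package on the cluster:

```python
from pyspark.sql import SparkSession

# Consume a Kafka topic with Structured Streaming and keep a running
# count of events per key, printed to the console sink.
spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)
counts = events.selectExpr("CAST(key AS STRING) AS key").groupBy("key").count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```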
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
Skills Required:
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google BigQuery / Hive); see the sketch after this list
4. In-depth understanding of principles of database structure
5. Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control – VSS / Azure DevOps / TFS / GitHub / Bitbucket – and CI/CD with Jenkins
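As a hedged illustration of one warehousing concept from the list above, the sketch below implements a Type 2 slowly changing dimension, closing the current row and inserting a new version when an attribute changes; SQLite stands in for the warehouse and all names are illustrative:

```python
import sqlite3

# Type 2 SCD: history is preserved as versioned rows; a NULL valid_to
# (and is_current = 1) marks the live version.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        city TEXT,
        valid_from TEXT,
        valid_to TEXT,
        is_current INTEGER
    );
    INSERT INTO dim_customer VALUES (1, 'Pune', '2023-01-01', NULL, 1);
""")

def scd2_update(conn, customer_id, new_city, as_of):
    # Close the current version, then open a new one.
    conn.execute(
        "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
        "WHERE customer_id = ? AND is_current = 1",
        (as_of, customer_id),
    )
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, new_city, as_of),
    )

scd2_update(conn, customer_id=1, new_city="Mumbai", as_of="2024-06-01")
print(conn.execute("SELECT * FROM dim_customer").fetchall())
```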
- 3+ years of experience in deployment, monitoring, tuning, and administration of high-concurrency MySQL production databases.
- Solid understanding of writing optimized SQL queries on MySQL databases (see the sketch after this list)
- Understanding of AWS, VPC, networking, security groups, IAM, and roles.
- Expertise in scripting in Python or Shell/PowerShell
- Must have experience in large scale data migrations
- Excellent communication skills.
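For illustration, a minimal sketch of the MySQL query-optimization loop implied above: EXPLAIN a slow query, add an index, and EXPLAIN again. The connection details and orders schema are placeholder assumptions, and a running MySQL server plus the mysql-connector-python package are required:

```python
import mysql.connector

# Placeholder credentials/schema: a real server and an existing
# `orders` table are assumed.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="shop"
)
cur = conn.cursor()

query = "SELECT order_id, total FROM orders WHERE customer_id = 42"

cur.execute("EXPLAIN " + query)   # expect type=ALL: full table scan
print(cur.fetchall())

cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
conn.commit()

cur.execute("EXPLAIN " + query)   # expect type=ref via the new index
print(cur.fetchall())
cur.close()
conn.close()
```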