Who We Are:
HighLevel is an all-in-one, white-label marketing platform for agencies & consultants. As a profitable, disruptive, and fast-growing SaaS company, we caught the eye of many and recently raised $60M in funding. Although the company is headquartered in Dallas, Texas, many team members work remotely from around the world while maintaining a strong company culture and work-life balance. Want to learn more? Check out our website: www.gohighlevel.com
Who You Are:
HighLevel is seeking a Senior Data Analyst who has a passion for turning data into actionable insights that improve business operational efficiency and drive key business decisions. This role is 100% remote, and you will report to the Manager of Data Analytics.
What You Will Do:
● Work to maintain the functionality and scalability of our core analytics stack (Mozart/Snowflake/Tableau), resolving ad hoc issues as needed.
● Identify, analyze, and interpret trends or patterns in complex data sets.
● Work independently to solve complex, ambiguous data problems with limited contextual information.
● Build, maintain, modify, and debug Tableau dashboards from end to end. This includes data source setup (published data sources and custom SQL queries; see the example query after this list), data visualization, global filters, dashboard construction, user filtering, layout optimization, etc.
● Gather and document business requirements for new analytics reports and dashboards, as well as modifications to existing reports/dashboards.
● Identify and implement opportunities for improved/streamlined business operations across the organization.
● Analyze existing analytics products to identify and document key insights.
● Identify patterns, trends, anomalies, changes in key KPIs in core data sets; generate business insights and analyses to drive decision making at the stakeholder and executive level.
● Work collaboratively with the analytics team to solve complex business problems.
● Leverage existing analytics products to identify, summarize, and communicate key findings across the company.
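For flavour, here is a minimal Python sketch of the kind of custom SQL query a published Tableau data source on this stack might wrap, pulled through the snowflake-connector-python client. The connection settings and the subscriptions table are hypothetical placeholders, not details of HighLevel's actual Snowflake environment.

    import snowflake.connector  # pip install snowflake-connector-python

    # Connection details below are hypothetical placeholders.
    conn = snowflake.connector.connect(
        account="your_account",
        user="your_user",
        password="your_password",
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )

    # The kind of custom SQL a Tableau data source might wrap:
    # monthly recurring revenue by plan, a typical KPI input.
    query = """
        SELECT plan_name,
               DATE_TRUNC('month', invoice_date) AS month,
               SUM(amount) AS mrr
        FROM subscriptions
        GROUP BY plan_name, month
        ORDER BY month, plan_name
    """

    cur = conn.cursor()
    try:
        cur.execute(query)
        for plan_name, month, mrr in cur.fetchall():
            print(plan_name, month, mrr)
    finally:
        cur.close()
        conn.close()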
What You Bring:
● Expert proficiency in Tableau
● Advanced SQL proficiency
● Advanced proficiency in Excel
● Python or R - helpful but not required
● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
● Excellent communication skills and proficiency in requirements-gathering
● Adept at query writing, report writing, and presenting findings
● BS in Mathematics, Economics, Computer Science, Information Management or Statistics
● A curious mind and a creative problem solver
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences, while providing awesome service to our clients and learning from one another along the way!
Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
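By way of illustration only, here is a minimal PySpark sketch of the ingest-transform-load shape described above. The paths, column names, and the Delta output format are assumptions for the example, not details of any actual pipeline.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

    # Ingest: read raw CSV files landed in the lake (path is a placeholder).
    raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

    # Transform: type the columns, drop bad rows, derive a partition column.
    clean = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("order_id").isNotNull())
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write partitioned Delta output, a common choice on Databricks.
    clean.write.format("delta").mode("overwrite") \
         .partitionBy("order_date").save("/mnt/curated/orders/")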
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
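ADF pipelines are typically authored in the portal or as JSON, but runs can also be triggered and monitored programmatically. Below is a rough sketch using the azure-mgmt-datafactory SDK; the resource group, factory, pipeline, and parameter names are placeholders rather than a prescription.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    # All names below are hypothetical placeholders.
    adf_client = DataFactoryManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
    )

    # Kick off a pipeline run with a runtime parameter.
    run = adf_client.pipelines.create_run(
        resource_group_name="rg-analytics",
        factory_name="adf-analytics",
        pipeline_name="pl_ingest_orders",
        parameters={"load_date": "2024-01-01"},
    )

    # Poll the run status (Queued / InProgress / Succeeded / Failed).
    status = adf_client.pipeline_runs.get(
        "rg-analytics", "adf-analytics", run.run_id
    )
    print(status.status)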
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
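As a small, hedged example of that tuning work: in Spark, oversized shuffles are often tamed with broadcast joins, and repeated recomputation with selective caching. The table names here are invented for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    orders = spark.read.table("curated.orders")  # large fact table (placeholder)
    plans = spark.read.table("curated.plans")    # small dimension table (placeholder)

    # Broadcast the small dimension table so the join avoids a full shuffle.
    joined = orders.join(F.broadcast(plans), "plan_id")

    # Cache a result that several downstream aggregations will reuse.
    joined.cache()
    joined.groupBy("plan_name").agg(F.sum("amount").alias("revenue")).show()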
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.
We are looking for a Quantitative Developer who is passionate about financial markets and wants to join a scale-up with an excellent track record and growth potential in an innovative and fast-growing industry.
As a Quantitative Developer, you will be working on the infrastructure of our platform, as part of a very ambitious team.
At QCAlpha you have the freedom to choose the path that leads to the solution, and you are given a great deal of responsibility.
Responsibilities
• Design, develop, test, and deploy elegant software solutions for automated trading systems
• Build high-performance, bullet-proof components for both live trading and simulation
• Develop technology infrastructure systems, including connectivity, maintenance, and internal automation processes
• Achieve trading system robustness through automated reconciliation and system-wide alerts
Requirements
• Bachelor’s degree or higher in computer science or other quantitative discipline
• Strong fundamental knowledge of object-oriented programming, algorithms, data structures, and design patterns.
• Familiarity with the following technology stack: Linux shell, Python and its ecosystem (NumPy, Pandas), SQL, Redis, Docker, or similar systems.
• Experience with Python frameworks such as Django or Flask.
• Solid understanding of Git and CI/CD.
• Excellent design, debugging and problem-solving skills.
• Proven versatility and ability to pick up new technologies and learn systems quickly.
• Trading Execution development and support experience is a plus.
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list).
• Good experience with SQL databases; able to write queries of fair complexity.
• Excellent experience in big data programming for data transformations and aggregations.
• Good grasp of ELT architecture, including business-rules processing and extracting data from a data lake into data streams for business consumption.
• Good customer communication skills.
• Good analytical skills.
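As referenced in the first bullet above, a minimal example mixing PySpark DataFrame core functions with Spark SQL; the data is invented purely for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Invented sample data, purely for illustration.
    df = spark.createDataFrame(
        [("north", 120.0), ("south", 80.0), ("north", 45.5)],
        ["region", "amount"],
    )

    # DataFrame API: filter, then aggregate.
    df.filter(F.col("amount") > 50) \
      .groupBy("region").agg(F.sum("amount").alias("total")).show()

    # Spark SQL: the same aggregation through a temporary view.
    df.createOrReplaceTempView("sales")
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()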
Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (local farmer) produce from harvest to your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.
Founded out of Bangalore by Arvind, Avinash, Guru, and Santosh, we have raised $7.5 million to date in Seed, Series A, and debt funding from investors including Accel, Omnivore, and Mayfield. Our brand Deep-Rooted.Co, launched in August 2020, was the first of its kind in India's fruits and vegetables (F&V) space. It is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the agri-tech sector.
Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.
How is this possible? It's because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder, https://www.linkedin.com/in/gururajsrao/) and CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it as it touches everyday life and is fun. This will be a virtual consultation.
We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.
Purpose of the role:
* As a startup, we have data distributed across various sources like Excel, Google Sheets, databases, etc. As we grow, we need swift decision-making based on the large amount of data that already exists. You will help us bring all this data together and put it into a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs; pull data in and manage its growth, freshness, and correctness (a minimal sketch follows the skills list below).
* Transform data into a format that aids easy decision-making for Product, Marketing, and Business Heads.
* Understand the business problem, solve it using technology, and take it to production - no hand-offs; the full path to production is yours.
Technical expertise:
* Good knowledge of and experience with programming languages: Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on the cloud.
Skills:
Java, SQL, Python, Data Build Tool (dbt), Lambda, HTTP, REST API, Extract-Transform-Load (ETL).
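As flagged in the role description above, here is a hedged sketch of pulling Google Sheets data into Python. The sheet name and credentials file are placeholders, and gspread is just one reasonable client choice, not necessarily the one used here.

    import gspread       # pip install gspread
    import pandas as pd  # pip install pandas

    # Authenticate with a service-account JSON key (path is a placeholder).
    gc = gspread.service_account(filename="service_account.json")

    # Open a sheet by name and pull rows as dicts keyed by the header row.
    worksheet = gc.open("daily-harvest-tracker").sheet1
    rows = worksheet.get_all_records()

    # Hand off to the data model / warehouse layer from here.
    df = pd.DataFrame(rows)
    print(df.head())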
1. Use Python Scrapy to crawl the website (a minimal spider sketch follows this list)
2. Work on dynamic websites and solve crawling challenges
3. Work in a fast-paced startup environment
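A minimal Scrapy spider sketch, to give a feel for the work above; the target URL and CSS selectors are hypothetical.

    import scrapy

    class ListingSpider(scrapy.Spider):
        name = "listing"
        start_urls = ["https://example.com/listings"]  # placeholder URL

        def parse(self, response):
            # CSS selectors below are hypothetical for the example site.
            for item in response.css("div.listing"):
                yield {
                    "title": item.css("h2::text").get(),
                    "price": item.css("span.price::text").get(),
                }
            # Follow pagination if present.
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Run it with: scrapy runspider spider.py -o items.json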
Lead Machine Learning Engineer
About IDfy
IDfy has been ranked amongst the World's Top 100 Regulatory Technology companies for the last two years. IDfy's AI-powered technology solutions help real people unlock real opportunities. We create the confidence required for people and businesses to engage with each other in the digital world. If you have used any major payment wallet, digitally opened a bank account, used a self-drive car, played a real-money online game, or hosted people through Airbnb, it's quite likely that your identity has been verified through IDfy at some point.
About the team
- The machine learning team is a closely knit team responsible for building models and services that support key workflows for IDfy.
- Our models are critical for these workflows and as such are expected to perform accurately and with low latency. We use a mix of conventional and hand-crafted deep learning models.
- The team comes from diverse backgrounds and experience. We respect opinions and believe in honest, open communication.
- We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform company and not a services company.
About the role
In this role you will:
- Work on all aspects of a production machine learning platform: acquiring data, training and building models, deploying models, building API services for exposing these models (a minimal serving sketch follows this section), maintaining them in production, and more.
- Work on performance tuning of models.
- From time to time, work on support and debugging of these production systems.
- Work on researching the latest technology in the areas of our interest and applying it to build newer products and enhance the existing platform.
- Build workflows for training and production systems.
- Contribute to documentation.
While the emphasis will be on researching, building, and deploying models into production, you will be expected to contribute to all the aspects mentioned above.
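To make "building API services for exposing these models" concrete (as noted in the list above), here is a minimal, hedged serving sketch using FastAPI. The endpoint, request schema, and stub model are invented for illustration; they are not IDfy's actual stack.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PredictRequest(BaseModel):
        text: str  # stand-in feature schema for the example

    def model_predict(text: str) -> float:
        # In a real service a trained model is loaded once at startup;
        # this stub keeps the sketch self-contained.
        return float(len(text) % 2)

    @app.post("/predict")
    def predict(req: PredictRequest):
        return {"score": model_predict(req.text)}

    # Run with: uvicorn app:app --host 0.0.0.0 --port 8000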
About you
- You are a seasoned machine learning engineer (or data scientist). Our ideal candidate is someone with 8+ years of experience in production machine learning.
Must Haves
- You should be experienced in framing and solving complex problems with the application of machine learning or deep learning models.
- Deep expertise in computer vision or NLP, with experience putting it into production at scale.
- You have experienced first-hand, and understand, that modelling is only a small part of building and delivering AI solutions, and you know what it takes to keep a high-performance system up and running.
- Experience managing a large-scale production ML system for at least a couple of years
- Optimization and tuning of models for deployment at scale
- Monitoring and debugging of production ML systems
- An enthusiasm and drive to learn, assimilate, and disseminate state-of-the-art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
- Past experience of mentoring junior colleagues
- Knowledge of and experience in ML Ops and tooling for efficient machine learning processes
Good to Have
- Our stack also includes languages like Go and Elixir. We would love it if you know any of these or take interest in functional programming.
- We use Docker and Kubernetes for deploying our services, so an understanding of this would be useful to have.
- Experience in using other platforms, frameworks, and tools.
Other things to keep in mind
- Our goal is to help a significant part of the world’s population unlock real opportunities. This is an opportunity to make a positive impact here, and we hope you like it as much as we do.
Life At IDfy
People at IDfy care about creating value. We take pride in the strong collaborative culture that we have built, and our love for solving challenging problems. Life at IDfy is not always what you’d expect at a tech start-up that’s growing exponentially every quarter. There’s still time and space for balance.
We host regular talks, events and performances around Life, Art, Sports, and Technology; continuously sparking creative neurons in our people to keep their intellectual juices flowing. There’s never a dull day at IDfy. The office environment is casual and it goes beyond just the dress code. We have no conventional hierarchies and believe in an open-door policy where everyone is approachable.
- 6+ months of proven experience as a Data Scientist or Data Analyst
- Understanding of machine learning and operations research
- Extensive knowledge of R, SQL, and Excel
- Analytical mind and business acumen
- Strong statistical understanding
- Problem-solving aptitude
- BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
● Good communication and collaboration skills with 4-7 years of experience.
● Ability to code and script with strong grasp of CS fundamentals, excellent problem solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills
● Good understanding of RDBMS
● Experience in building data pipelines and processing large datasets.
● Knowledge of building Web Scraping and data mining is a plus.
● Working knowledge of open-source tools such as MySQL, Solr, Elasticsearch, and Cassandra (data stores) would be a plus.
● Expert in Python programming
Role and responsibilities
● Inclined towards working in a start-up environment.
● Design and build robust, scalable data engineering solutions for structured and unstructured data to deliver business insights, reporting, and analytics.
● Troubleshoot and debug data completeness and quality issues, and scale overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations, and other integrations).
Job Description
Niki is an artificially intelligent ordering application (http://niki.ai/app). Our founding team is from IIT Kharagpur, and we are looking for a Natural Language Processing Engineer to join our engineering team.
The ideal candidate will have industry experience solving language-related problems using statistical methods on vast quantities of data available from Indian mobile consumers and elsewhere.
Major responsibilities would be:
1. Create language models from text data. These language models draw heavily on recent statistical, deep learning, and rule-based research around building taggers, parsers, knowledge-graph-based dictionaries, etc.
2. Develop highly scalable classifiers and tools leveraging machine learning, regression, and rules-based models (a small classifier sketch follows this list)
3. Work closely with product teams to implement algorithms that power user and developer-facing products
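For a flavour of point 2, a minimal scikit-learn text-classifier sketch; the utterances and intent labels are invented, and the statistical or deep learning approaches actually used may differ.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented utterances and intent labels, purely for illustration.
    texts = ["recharge my phone", "book a cab to the airport",
             "topup 100 rupees", "get me a taxi now"]
    labels = ["recharge", "cab", "recharge", "cab"]

    # TF-IDF features feeding a linear classifier: a classic baseline
    # before moving to deep or rule-augmented models.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)

    print(clf.predict(["please recharge my 3g pack"]))  # likely ['recharge']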
We work mostly in Java and Python, and object-oriented concepts are a must to fit in with the team. Basic eligibility criteria are:
1. Graduate/Post-Graduate/M.S./
2. Industry experience of at least 5 years.
3. Strong background in Natural Language Processing and Machine Learning.
4. Some experience in leading a team, big or small.
5. Experience with Hadoop/HBase/Pig or MapReduce/Sawzall/Bigtable is a plus.
Competitive Compensation.
What We're Building
We are building an automated messaging platform to simplify the ordering experience for consumers. We have launched the Android app: http://niki.ai/app. In its current avatar, Niki can process mobile phone recharges and book cabs for consumers. It assists in finding the right recharge plans across top-up, 2G, and 3G, and makes the transaction. In cab booking, it helps with end-to-end booking along with tracking and cancellation within the app. You may also compare to get the nearest or the cheapest cab among those available.
Being an instant messaging app, it works seamlessly on 2G / 3G / WiFi and is lightweight at around 3.6 MB. You may check it out at: https://niki.ai/

