Purpose of Job:
We are looking for someone who can manage the daily activities of the
team responsible for the design, implementation, maintenance, and
support of data warehouse systems and related data marts, and who will
oversee data design and the creation of database architecture.
Job Responsibilities:
- 7+ years of industry experience and 2+ years of experience managing a team
- Exceptional knowledge of designing modern databases such as MySQL, Postgres, Redshift, Snowflake, Hive, or Presto
- Good experience working on agile-based projects
- Experience understanding and converting BRDs into technical designs
- Work with management to provide effort estimation and timelines
- Work closely with IT, business teams, and other internal stakeholders to resolve business queries within the defined SLA
- Exceptional knowledge of SQL and of designing tables, databases, and partitions, and of query optimization
- Good experience designing ETL solutions in tools such as Talend, AWS Glue, or EMR
- At least 3 years of experience working on AWS Cloud
- Hands-on experience monitoring ETL jobs and performing health checks to ensure data quality
- Good exposure to AWS services such as RDS, Aurora, S3, Lambda, Glue, EMR, Step Functions, etc.
- Design quality assurance tests to ensure data integrity and quality
- Good to have: Python and big data knowledge
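The ETL health-check responsibility above can be made concrete with a small sketch. This is an illustrative example only, not part of the posting: the `orders` table and its columns are hypothetical, and SQLite stands in for a real warehouse, but the checks (row count, null keys, key uniqueness) are the kind of post-load validation the role describes.

```python
import sqlite3

def health_check(conn, table, key_column, min_rows=1):
    """Run basic post-load data-quality checks; returns check name -> pass/fail."""
    cur = conn.cursor()
    # Check 1: the load produced at least min_rows rows.
    row_count = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    # Check 2: the key column has no NULLs.
    null_keys = cur.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {key_column} IS NULL"
    ).fetchone()[0]
    # Check 3: the non-NULL keys are unique (COUNT DISTINCT ignores NULLs).
    distinct_keys = cur.execute(
        f"SELECT COUNT(DISTINCT {key_column}) FROM {table}"
    ).fetchone()[0]
    return {
        "row_count_ok": row_count >= min_rows,
        "no_null_keys": null_keys == 0,
        "keys_unique": distinct_keys == row_count - null_keys,
    }

# Hypothetical load: an orders table where one row arrived without a key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.5), (None, 3.0)])
results = health_check(conn, "orders", "order_id", min_rows=2)
print(results)  # flags the NULL key: no_null_keys is False
```

In practice such checks would run as a step after each scheduled load and alert on any failure rather than just print.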
Qualifications:
At least a bachelor's degree in Science, Engineering, or Applied Mathematics. A master's in Computer Science or a related field is preferred.
Other Requirements: Leadership skills, excellent communication skills, ability to own tasks
About Fintech Company
Similar jobs
● Research and develop advanced statistical and machine learning models for
analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand characteristics of data, evaluate alternate
models and validate hypothesis through theoretical and empirical approaches.
● Productize proven or working models into production quality code.
● Collaborate with product management, marketing and engineering teams in
Business Units to elicit & understand their requirements & challenges and
develop potential solutions
● Stay current with the latest research and technology ideas; share knowledge by
clearly articulating results and ideas to key decision makers.
● File patents for innovative solutions that add to the company's IP portfolio
Requirements
● 4 to 6 years of strong experience in data mining, machine learning and
statistical analysis.
● BS/MS/PhD in Computer Science, Statistics, Applied Math, or related areas
from premier institutes (only candidates from IITs / IISc / BITS / top NITs, or
top US universities, should apply)
● Experience in productizing models to code in a fast-paced start-up
environment.
● Expertise in the Python programming language and fluency in analytical tools
such as MATLAB, R, Weka, etc.
● Strong intuition for data and a keen aptitude for large-scale data analysis
● Strong communication and collaboration skills.
About Slintel (a 6sense company) :
Slintel, a 6sense company and the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence.
Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.
Slintel is a fast-growing B2B SaaS company in the sales and marketing tech space. We are funded by top-tier VCs and going after a billion-dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.
We are a big data company and perform deep analysis on technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then combined with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.
6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.
6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.
Linkedin (Slintel) : https://www.linkedin.com/company/slintel/
Industry : Software Development
Company size : 51-200 employees (189 on LinkedIn)
Headquarters : Mountain View, California
Founded : 2016
Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.
Website (Slintel) : https://www.slintel.com/slintel
Linkedin (6sense) : https://www.linkedin.com/company/6sense/
Industry : Software Development
Company size : 501-1,000 employees (937 on LinkedIn)
Headquarters : San Francisco, California
Founded : 2013
Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales
Website (6sense) : https://6sense.com/
Acquisition News :
https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/
Funding Details & News :
Slintel funding : https://www.crunchbase.com/organization/slintel
6sense funding : https://www.crunchbase.com/organization/6sense
https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round
https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round
https://xipometer.com/en/company/6sense
Slintel & 6sense Customers :
https://www.featuredcustomers.com/vendor/slintel/customers
https://www.featuredcustomers.com/vendor/6sense/customers
About the job
Responsibilities
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technologies
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems
Requirements
- 3+ years of experience in a Data Engineer role
- Proficiency in Linux
- Must have strong SQL knowledge and experience with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena
- Must have experience with Python/ Scala
- Must have experience with Big Data technologies like Apache Spark
- Must have experience with Apache Airflow
- Experience with data pipeline and ETL tools like AWS Glue
- Experience working with AWS cloud services (EC2, S3, RDS, Redshift) and other data solutions, e.g. Databricks, Snowflake
Desired Skills and Experience
Python, SQL, Scala, Spark, ETL
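As an illustration of the extract-transform-load pattern this role centres on, here is a minimal self-contained sketch. Everything in it is hypothetical (the vendor feed, the `tech_usage` table, the dedup rule), and SQLite stands in for the real warehouse:

```python
import sqlite3

def extract():
    # Hypothetical source records, standing in for a third-party vendor feed.
    return [
        {"company": "Acme", "tech": "Redshift", "seen": "2022-01-03"},
        {"company": "Acme", "tech": "redshift", "seen": "2022-01-05"},
        {"company": "Globex", "tech": "Snowflake", "seen": "2022-01-04"},
    ]

def transform(records):
    # Normalise casing and deduplicate on (company, tech).
    seen, out = set(), []
    for r in records:
        key = (r["company"], r["tech"].lower())
        if key not in seen:
            seen.add(key)
            out.append({"company": r["company"], "tech": r["tech"].lower()})
    return out

def load(conn, rows):
    # Load the cleaned rows into the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS tech_usage (company TEXT, tech TEXT)")
    conn.executemany("INSERT INTO tech_usage VALUES (:company, :tech)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(extract()))
count = conn.execute("SELECT COUNT(*) FROM tech_usage").fetchone()[0]
print(count)  # 2 distinct (company, tech) pairs after dedup
```

In a production pipeline each stage would typically be a separate task in an orchestrator such as Airflow, so failures can be retried per stage.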
Hi,
We are hiring a Data Scientist in Bangalore.
Req Skills:
- NLP
- ML programming
- Spark
- Model Deployment
- Experience processing unstructured data and building NLP models
- Experience with big data tools such as PySpark
- Pipeline orchestration using Airflow and model deployment experience is preferred
Work at the intersection of Energy, Weather & Climate Sciences and Artificial Intelligence.
Responsibilities:
- Manage all real-time and batch ETL pipelines with complete ownership
- Develop systems for integration, storage and accessibility of multiple data streams from SCADA, IoT devices, Satellite Imaging, Weather Simulation Outputs, etc.
- Support team members on product development and mentor junior team members
Expectations:
- Ability to work on broad objectives and move from vision to business requirements to technical solutions
- Willingness to assume ownership of effort and outcomes
- High levels of integrity and transparency
Requirements:
- Strong analytical and data driven approach to problem solving
- Proficiency in Python programming and working with numerical and/or imaging data
- Experience working in Linux environments
- Industry experience in building and maintaining ETL pipelines
This person MUST have:
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications including practical experience with and theoretical understanding of algorithms for classification, regression and clustering.
- Hands-on experience in computer vision and deep learning projects to solve real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional Computer Vision Algorithms.
- Experience in one of the deep learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLOv4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
Experience:
- Minimum 2 years of experience
- Startup experience is a must.
Location:
- Remote developer
Timings:
- 40 hours a week, with 4 hours a day overlapping with the client's time zone. Clients are typically in the US Pacific (PST) time zone.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
About us
Skit (previously known as Vernacular.ai, http://vernacular.ai/) is an AI-first SaaS voice automation company. Its suite of speech and language solutions enables enterprises to automate their contact centre operations. With over 10 million hours of training data, its product, Vernacular Intelligent Voice Assistant (VIVA), can currently respond in 16+ languages, covering 160+ dialects and replicating human-like conversations.
Skit currently serves a variety of enterprise clients across diverse sectors such as BFSI, F&B, Hospitality, Consumer Electronics, and Travel & Tourism, including prominent clients like Axis Bank, Hathway, Porter, and Barbeque Nation. It has been featured as one of the top-notch start-ups in Cisco Launchpad's Cohort 6 and is part of the World Economic Forum's Global Innovators Community. It has also been listed in Forbes 30 Under 30 Asia start-ups 2021 for its remarkable industry innovation.
We are looking for ML Research Engineers to work on the following problems:
- Spoken Language Understanding and Dialog Management.
- Language semantics, parsing, and modeling across multiple languages.
- Speech Recognition, Speech Analytics and Voice Processing across multiple languages.
- Response Generation and Speech Synthesis.
- Active Learning, Monitoring and Observability mechanisms for deployments.
Responsibilities
- Design, build and evaluate Machine Learning solutions.
- Perform experiments and statistical analyses to draw conclusions and take modeling decisions.
- Study, implement, and extend state-of-the-art systems.
- Take part in regular research reviews and discussions.
- Build, maintain and extend our open source solutions in the domain.
- Write well-crafted programs at all levels of the system. This includes the data pipelines, experiment prototypes, fast and scalable deployment models, and evaluation, visualization and monitoring systems.
Requirements
- Practical Machine Learning experience as demonstrated by earlier works.
- Knowledge of and ability to use tools from theoretical and practical aspects of computer science. This includes, but is not limited to, probability, statistics, learning theory, algorithms, software architecture, programming languages, etc.
- Good programming skills and ability to work with programs at all levels of a finished Machine Learning product. We prefer language agnosticism since that exemplifies this point.
- Git portfolios and blogs are helpful as they let us better evaluate your work.
This notice explains:
- What information we collect during our application and recruitment process and why we collect it;
- How we use that information; and
- How to access and update that information.
This policy covers the information you share with Skit (Cyllid Technologies Pvt. Ltd.) during the application or recruitment process including:
- Your name, address, email address, telephone number and other contact information;
- Your resume or CV, cover letter, previous and/or relevant work experience or other experience, education, transcripts, or other information you provide to us in support of an application and/or the application and recruitment process;
- Information from interviews and phone-screenings you may have, if any;
- Details of the type of employment you are or may be looking for, current and/or desired salary and other terms relating to compensation and benefits packages, willingness to relocate, or other job preferences;
- Details of how you heard about the position you are applying for;
- Reference information and/or information received from background checks (where applicable), including information provided by third parties;
- Information about your educational and professional background from publicly available sources, including online, that we believe is relevant to your application or a potential future application (e.g. your LinkedIn profile); and/or
- Information related to any assessment you may take as part of the interview screening process.
Your information will be used by Skit for the purposes of carrying out its application and recruitment process which includes:
- Assessing your skills, qualifications and interests against our career opportunities;
- Verifying your information and carrying out reference checks and/or conducting background checks (where applicable) if you are offered a job;
- Communications with you about the recruitment process and/or your application(s), including, in appropriate cases, informing you of other potential career opportunities at Skit;
- Creating and/or submitting reports as required under any local laws and/or regulations, where applicable;
- Making improvements to Skit's application and/or recruitment process including improving diversity in recruitment practices;
- Proactively conducting research about your educational and professional background and skills and contacting you if we think you would be suitable for a role with us.
Responsibilities
- Responsible for implementation and ongoing administration of Hadoop
infrastructure.
- Aligning with the systems engineering team to propose and deploy new
hardware and software environments required for Hadoop and to expand existing
environments.
- Working with data delivery teams to setup new Hadoop users. This job includes
setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig
and MapReduce access for the new users.
- Cluster maintenance, as well as creation and removal of nodes, using tools such as
Ganglia, Nagios, Cloudera Manager Enterprise, and Dell OpenManage
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performances and capacity planning
- Monitor Hadoop cluster connectivity and security
- Manage and review Hadoop log files.
- File system management and monitoring.
- Diligently teaming with the infrastructure, network, database, application and
business intelligence teams to guarantee high data quality and availability
- Collaboration with application teams to install operating system and Hadoop
updates, patches, version upgrades when required.
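The user-onboarding steps above (Linux account, Kerberos principal, HDFS home directory, access smoke test) can be sketched as a dry-run command generator. The commands are standard Linux/Hadoop CLI invocations, but the realm, paths, and user name are placeholders and every real cluster's procedure differs, so this sketch only prints the commands rather than executing them:

```python
def onboarding_commands(user, realm="EXAMPLE.COM"):
    """Return the shell commands to onboard one new Hadoop user (dry run)."""
    home = f"/user/{user}"
    return [
        f"useradd {user}",                                        # Linux account
        f'kadmin.local -q "addprinc -randkey {user}@{realm}"',    # Kerberos principal
        f"hdfs dfs -mkdir {home}",                                # HDFS home directory
        f"hdfs dfs -chown {user}:{user} {home}",                  # hand over ownership
        f"sudo -u {user} hdfs dfs -ls {home}",                    # access smoke test
    ]

# Print the checklist for a hypothetical new analyst account.
for cmd in onboarding_commands("analyst1"):
    print(cmd)
```

On a real cluster the same checklist would also cover Hive/Pig grants and a test MapReduce job, per the responsibilities above.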
Qualifications
- Bachelor's degree in Information Technology, Computer Science, or other relevant fields
- General operational expertise such as good troubleshooting skills,
understanding of systems capacity, bottlenecks, basics of memory, CPU, OS,
storage, and networks.
- Hadoop skills like HBase, Hive, Pig, Mahout
- Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs,
monitor critical parts of the cluster, configure NameNode high availability,
schedule and configure jobs, and take backups.
- Good knowledge of Linux as Hadoop runs on Linux.
- Familiarity with open source configuration management and deployment tools
such as Puppet or Chef and Linux scripting.
Nice to Have
- Knowledge of troubleshooting core Java applications is a plus.
About GlowRoad:
GlowRoad is building India's most profitable social e-commerce platform, where resellers share
catalogs of products through their networks on Facebook, WhatsApp, Instagram, etc., and
convert them to sales. GlowRoad is on a mission to create micro-entrepreneurs (resellers) who can set up their web-store, market their products, and track all transactions through its platform.
The GlowRoad app has ~15M downloads and 1 million+ MAUs.
GlowRoad has been funded by global VCs like Accel Partners, CDH, KIP, and Vertex Ventures, and recently raised Series C funding. We are scaling our operations across India.
GlowRoad is looking for team members passionate about building platforms for the next billion
users and reimagining e-commerce for mobile-first users. We offer a fun, open,
energetic, and creative environment, with approachable leadership, passionate people, open communication, and high growth for employees.
Role:
● Gather, process/analyze and report business data across departments
● Report key business data/metrics on a regular basis (daily, weekly and monthly
as relevant)
● Structure concise reports to share with management
● Work closely with Senior Analysts to create data pipelines for Analytical
Databases for Category, Operations, Marketing, Support teams.
● Assist Senior Analysts in projects by learning new reporting tools like Power BI
and advanced analytics with R
Basic Qualifications
● Engineering Graduate
● 6-24 months of hands-on experience with SQL, Excel, and Google Spreadsheets
● Experience in creating MIS/Dashboards in Excel/Google Spreadsheets
● Strong in Mathematics
● Ability to take full ownership in terms of timeline and data sanity with respect to
reports
● Basic verbal and written English communication skills
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, Entrepreneurship, and more. upGrad is looking for people passionate about management and education to help design learning programs for working professionals, helping them stay sharp and relevant and building the careers of tomorrow.
- Insurance P&C and Specialty domain experience a plus
- Experience in a cloud-based architecture preferred, such as Databricks, Azure Data Lake, Azure Data Factory, etc.
- Strong understanding of ETL fundamentals and solutions; proficiency in writing advanced/complex SQL and expertise in performance tuning and optimization of SQL queries are required.
- Strong experience in Python/PySpark and Spark SQL
- Experience in troubleshooting data issues, analyzing end to end data pipelines, and working with various teams in resolving issues and solving complex problems.
- Strong experience developing Spark applications using PySpark and SQL for data extraction, transformation, and aggregation from multiple formats, and for analyzing and transforming data to uncover insights and actionable intelligence for internal and external use
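The group-and-aggregate pattern at the heart of the Spark work described above can be illustrated in plain Python, so the sketch runs without a Spark cluster. The column names and figures are made up for illustration; the roughly equivalent PySpark call is noted in a comment:

```python
from collections import defaultdict

# Hypothetical claims records; column names are illustrative only.
claims = [
    {"line": "property", "state": "TX", "paid": 1200.0},
    {"line": "property", "state": "TX", "paid": 800.0},
    {"line": "specialty", "state": "NY", "paid": 500.0},
]

# Group by line of business and aggregate total paid and claim count.
# In PySpark this is roughly:
#   df.groupBy("line").agg(F.sum("paid"), F.count("*"))
totals = defaultdict(lambda: {"paid": 0.0, "claims": 0})
for row in claims:
    agg = totals[row["line"]]
    agg["paid"] += row["paid"]
    agg["claims"] += 1

print(dict(totals))
```

The difference in practice is scale: Spark distributes this shuffle-and-reduce across executors, which is why the posting pairs PySpark with SQL performance tuning.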