11+ SAS GIS Jobs in Delhi, NCR and Gurgaon
A Delhi NCR-based Applied AI & Consumer Tech company tackling one of the largest unsolved consumer internet problems of our time. We are a motley crew of smart, passionate and nice people who believe you can build a high-performing company with a culture of respect (aka a sports team with a heart, aka a caring meritocracy).
Our illustrious angels include unicorn founders, serial entrepreneurs with exits, tech & consumer industry stalwarts and investment professionals/bankers.
We are hiring for our founding team (in Delhi NCR only, no remote) that will take the product from prototype to launch. This is an opportunity for disproportionate, non-linear impact, learning and wealth creation in a classic 0-to-1 journey with a Silicon Valley-caliber founding team.
Key Responsibilities:
1. Data Strategy and Vision:
· Develop and drive the company's data analytics strategy, aligning it with overall business goals.
· Define the vision for data analytics, outlining clear objectives and key results (OKRs) to measure success.
2. Data Analysis and Interpretation:
· Oversee the analysis of complex datasets to extract valuable insights, trends, and patterns.
· Utilize statistical methods and data visualization techniques to present findings in a clear and compelling manner to both technical and non-technical stakeholders.
3. Data Infrastructure and Tools:
· Evaluate, select, and implement advanced analytics tools and platforms to enhance data processing and analysis capabilities.
· Collaborate with IT teams to ensure a robust and scalable data infrastructure, including data storage, retrieval, and security protocols.
4. Collaboration and Stakeholder Management:
· Collaborate cross-functionally with teams such as marketing, sales, and product development to identify opportunities for data-driven optimizations.
· Act as a liaison between technical and non-technical teams, ensuring effective communication of data insights and recommendations.
5. Performance Measurement:
· Establish key performance indicators (KPIs) and metrics to measure the impact of data analytics initiatives on business outcomes.
· Continuously assess and improve the accuracy and relevance of analytical models and methodologies.
Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or related field.
- Proven experience (5+ years) in data analytics, with a focus on leading analytics teams and driving strategic initiatives.
- Proficiency in data analysis tools such as Python, R, SQL, and advanced knowledge of data visualization tools.
- Strong understanding of statistical methods, machine learning algorithms, and predictive modelling techniques.
- Excellent communication skills, both written and verbal, to effectively convey complex findings to diverse audiences.
Who we are looking for
· A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
· Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
· Mentor and coach other team members
· Evaluate the performance of NLP models and ideate on how they can be improved
· Support internal and external NLP-facing APIs
· Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
· Graduate in any discipline with at least 2 years of demonstrated experience as a Data Scientist.
Behavioural Skills
· Strong analytical and problem-solving capabilities.
· Proven ability to multi-task and deliver results within tight time frames
· Must have strong verbal and written communication skills
· Strong listening skills and eagerness to learn
· Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
· NLP
· Deep Learning
· Machine Learning
· Python
· BERT (see the sketch after this section)
Preferred Requirements
· Experience in Computer Vision is preferred
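For context on the BERT requirement above, here is a minimal, illustrative sketch of loading a pre-trained BERT model for text classification with the Hugging Face transformers library. The model name, labels and example sentences are assumptions for illustration only, not part of this role's actual stack, and the classification head would still need fine-tuning on task-specific data.

    # Minimal sketch: inference with a pre-trained BERT encoder and a
    # sequence-classification head. Model name and inputs are illustrative.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    # Note: the classification head here is freshly initialized and would need
    # fine-tuning on labelled data before its predictions are meaningful.
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    texts = ["The delivery was quick and the product works great.",
             "The app keeps crashing after the latest update."]
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits
    predictions = logits.argmax(dim=-1)
    print(predictions.tolist())  # one class index per input text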
Role: Data Scientist
Industry Type: Banking
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Machine Learning
Location: Gurgaon
About the company:
The company is changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals. The company has built the world's leading image-editing AI software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList and prominent founders and internet company operators, who believe that there is a more intelligent and efficient way of doing digital production than how the world operates currently.
Job Description :
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to
play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of raw e-commerce, automobile, food and real-estate images into processed final images.
- You will be responsible for researching the latest state of the art in the field of computer vision, designing the solution architecture for our offerings, and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and
models are being constantly trained and updated
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them and make our customers successful
- You will work closely with the Product & Engineering teams to convert the models into beautiful products
that will be used by thousands of Businesses everyday to transform their images and videos.
Job Requirements:
- Min 3+ years of work experience in Computer Vision, with 5-10 years of work experience overall
- BS/MS/PhD degree in Computer Science, Engineering or a related subject from an Ivy League institute
- Exposure to Deep Learning techniques and TensorFlow/PyTorch
- Prior expertise in building image-processing applications using GANs, CNNs and diffusion models
- Expertise with Python image-processing libraries such as OpenCV (a minimal processing sketch follows this list)
- Good hands-on experience with Python and the Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience managing teams and building large-scale AI/CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
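To illustrate the kind of image-processing work described above, here is a minimal sketch of a rough product-image clean-up step using OpenCV. The file names, threshold choice and white-background compositing are illustrative assumptions; a production pipeline would rely on trained segmentation models (CNN-, GAN- or diffusion-based) rather than simple thresholding.

    # Minimal sketch: rough product-image clean-up with OpenCV.
    # File names and threshold values are illustrative assumptions only.
    import cv2
    import numpy as np

    image = cv2.imread("product_raw.jpg")            # raw catalogue shot
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Rough foreground mask via Otsu thresholding; a real pipeline would use
    # a trained segmentation/matting model instead of this heuristic.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    # Composite the detected foreground onto a clean white background.
    white_bg = np.full_like(image, 255)
    result = np.where(mask[..., None] == 255, image, white_bg)
    cv2.imwrite("product_processed.jpg", result)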
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data Integrations and Data Ops
Job Reference ID: BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
➢ 7 years of work experience with ETL, Data Modelling, and Data Architecture.
➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.
➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, orchestrated using Airflow.
Technical Experience:
➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, batch and streaming data pipelines.
➢ Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark (a minimal Glue job sketch follows this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch events.
➢ You will be working in collaboration with other teams; good communication is a must.
➢ Must have experience in using AWS service APIs, the AWS CLI and SDKs.
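As a point of reference for the Glue/PySpark items above, here is a minimal sketch of an AWS Glue job script that reads a table from the Glue Data Catalog, applies a column mapping, and writes Parquet to S3. The database, table and bucket names are placeholders, not real resources.

    # Minimal sketch of an AWS Glue PySpark job: catalog source -> mapping -> S3 sink.
    # Database, table and bucket names below are placeholders.
    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    sc = SparkContext()
    glue_context = GlueContext(sc)
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Source: a table registered in the Glue Data Catalog (placeholder names).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="example_orders")

    # Simple column mapping / rename as the transformation step.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[("order_id", "string", "order_id", "string"),
                  ("amount", "double", "order_amount", "double")])

    # Sink: Parquet on S3 (placeholder bucket).
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/orders/"},
        format="parquet")

    job.commit()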
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes.
➢ Expert-level skills in writing and optimizing SQL.
➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis and EC2 clusters is highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
Responsibilities:
- Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
- Driving optimization, testing and tooling to improve quality.
- Reviewing and approving high-level & detailed design to ensure that the solution delivers to the business needs and aligns to the data & analytics architecture principles and roadmap.
- Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
- Following proper SDLC (Code review, sprint process).
- Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
- Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
- Understanding various data security standards and using data security tools to apply and adhere to the required data controls for user access in the Hadoop platform.
- Supporting and contributing to development guidelines and standards for data ingestion.
- Working with a data scientist and business analytics team to assist in data ingestion and data related technical issues.
- Designing and documenting the development & deployment flow.
Requirements:
- Experience in developing REST API services using one of the Scala frameworks.
- Ability to troubleshoot and optimize complex queries on the Spark platform
- Expert in building and optimizing 'big data' data/ML pipelines, architectures and data sets (a minimal Spark pipeline sketch follows this list).
- Knowledge of modelling unstructured data into structured data designs.
- Experience in Big Data access and storage techniques.
- Experience in doing cost estimation based on the design and development.
- Excellent debugging skills for the technical stack mentioned above which even includes analyzing server logs and application logs.
- Highly organized, self-motivated, proactive, with the ability to propose the best design solutions.
- Good time management and multitasking skills to meet deadlines, working both independently and as part of a team.
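To illustrate the kind of data/ML pipeline work described above, here is a minimal sketch of a batch ML pipeline built with Spark's ML library. The Hive table, column names and choice of logistic regression are illustrative assumptions only, not a prescribed design.

    # Minimal sketch: batch ML pipeline on Spark (assemble features, fit, score).
    # Table and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = (SparkSession.builder
             .appName("example-ml-pipeline")
             .enableHiveSupport()
             .getOrCreate())

    # Read curated data from the Hadoop platform (placeholder table name).
    df = spark.table("curated.example_events")

    assembler = VectorAssembler(
        inputCols=["feature_a", "feature_b", "feature_c"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    pipeline = Pipeline(stages=[assembler, lr])
    train, test = df.randomSplit([0.8, 0.2], seed=42)

    model = pipeline.fit(train)
    predictions = model.transform(test)
    predictions.select("label", "prediction").show(5)

    spark.stop()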
Data Scientist
Requirements
● B.Tech/Masters in Mathematics, Statistics, Computer Science or another
quantitative field
● 2-3+ years of work experience in the ML domain (2-5 years of experience)
● Hands-on coding experience in Python
● Experience in machine learning techniques such as Regression, Classification,
Predictive modeling, Clustering, Deep Learning stack, NLP
● Working knowledge of TensorFlow/PyTorch
Optional Add-ons:
● Experience with distributed computing frameworks: Map/Reduce, Hadoop, Spark
etc.
● Experience with databases: MongoDB
Octro Inc. is looking for a Data Scientist who will support the product, leadership and marketing teams with insights gained from analyzing multiple sources of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights.
They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
Responsibilities :
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Mine and analyze data from multiple databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Assess the effectiveness and accuracy of new data sources and data gathering techniques.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modelling to improve and optimize user experiences, revenue generation, ad targeting and other business outcomes.
- Develop various A/B testing frameworks and test model quality (a minimal evaluation sketch follows this list).
- Coordinate with different functional teams to implement models and monitor outcomes.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
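As an illustration of the A/B testing responsibility above, here is a minimal sketch of evaluating a single experiment with a two-proportion z-test using statsmodels. The conversion counts and significance level are made-up assumptions for illustration, not real results.

    # Minimal sketch: two-proportion z-test for one A/B experiment.
    # The counts below are hypothetical, not real data.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [420, 480]   # hypothetical conversions in control (A) and variant (B)
    samples = [10000, 10000]   # hypothetical users per arm

    z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
    print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

    # With a pre-registered significance level of 0.05, reject the null of
    # equal conversion rates only if p_value < 0.05.
    if p_value < 0.05:
        print("Variant B's conversion rate differs significantly from A's.")
    else:
        print("No statistically significant difference detected.")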
Qualifications :
- Strong problem solving skills with an emphasis on product development and improvement.
- Advanced knowledge of SQL and its use in data gathering/cleaning.
- Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
- Experience working with and creating data architectures.
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
- Excellent written and verbal communication skills for coordinating across teams.
As a Data Science Lead, you will be working on creating industry-first analytical and propensity models to help discover the information hidden in vast amounts of data, and make smarter decisions to deliver an even better customer experience. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.
➢ Working with business and leadership teams to gather and analyse structured and unstructured data
➢ Data mining using state-of-the-art methods
➢ Enhancing data collection procedures to include information that is relevant for building analytic
systems
➢ Processing, cleansing, and verifying the integrity of data used for analysis
➢ Doing ad-hoc analysis and presenting results in a clear manner
➢ Creating automated anomaly detection systems and constantly tracking their performance (a minimal sketch follows this list)
➢ Creating and evolving an efficient BI pipeline into a multi-faceted pipeline that supports various modelling needs.
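To illustrate the anomaly detection responsibility above, here is a minimal sketch using scikit-learn's IsolationForest on hypothetical daily metrics. The feature names, synthetic data and contamination rate are assumptions for illustration only.

    # Minimal sketch: automated anomaly detection over daily business metrics
    # with IsolationForest. The metrics here are synthetic placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Hypothetical daily metrics, e.g. pulled from the BI pipeline.
    rng = np.random.default_rng(0)
    metrics = pd.DataFrame({
        "txn_count": rng.normal(1000, 50, 365),
        "avg_ticket_size": rng.normal(250, 20, 365),
    })

    detector = IsolationForest(contamination=0.02, random_state=0)
    metrics["anomaly"] = detector.fit_predict(
        metrics[["txn_count", "avg_ticket_size"]])

    # -1 marks an anomaly; these rows would be routed to alerting, and the
    # detector's precision tracked over time as part of performance monitoring.
    print(metrics[metrics["anomaly"] == -1].head())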
What we are looking for:
➢ 5-8 years of relevant experience, preferably in the financial services industry.
➢ A bachelor's/master's degree in Statistics, Mathematics, Computer Science or Management from a Tier 1 institute.
➢ Data warehousing experience will be a plus.
➢ Good conceptual understanding of statistics and probability.
➢ Experience in developing dashboards and reports using BI tools.