Software Architect/CTO
Blenheim Chalcot IT Services India Pvt Ltd

5 - 8 yrs
₹25L - ₹30L / yr
Mumbai
Skills
SQL Azure
Azure Data Factory (ADF)
Azure Data Lake
Azure Databricks
ETL
Power BI
Azure Synapse
Data Warehouse (DWH)
API
SFTP
JSON
Java
Python
C#
JavaScript
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
As a hands-on Data Architect, you will be part of a team responsible for building enterprise-grade
Data Warehouse and Analytics solutions that aggregate data across diverse sources and data types
including text, video and audio through to live stream and IoT in an agile project delivery
environment with a focus on DataOps and Data Observability. You will work with Azure SQL
Databases, Synapse Analytics, Azure Data Factory, Azure Datalake Gen2, Azure Databricks, Azure
Machine Learning, Azure Service Bus, Azure Serverless (LogicApps, FunctionApps), Azure Data
Catalogue and Purview among other tools, gaining opportunities to learn some of the most
advanced and innovative techniques in the cloud data space.
You will be building Power BI based analytics solutions to provide actionable insights into customer
data, and to measure operational efficiencies and other key business performance metrics.
You will be involved in the development, build, deployment, and testing of customer solutions, with
responsibility for the design, implementation, and documentation of the technical aspects, including
integration, to ensure the solution meets customer requirements. You will work closely with
fellow architects, engineers, analysts, team leads, and project managers to plan, build, and roll
out data-driven solutions.
Expertise:

  • Proven expertise in developing data solutions with Azure SQL Server and Azure SQL Data Warehouse (now Synapse Analytics).
  • Demonstrated expertise in data modelling and data warehouse methodologies and best practices.
  • Ability to write efficient data pipelines for ETL using Azure Data Factory or equivalent tools.
  • Integration of data feeds utilising both structured (e.g. XML/JSON) and flat schemas (e.g. CSV, TXT, XLSX) across a wide range of electronic delivery mechanisms (API, SFTP, etc.).
  • Azure DevOps knowledge, essential for CI/CD of data ingestion pipelines and integrations.
  • Experience with object-oriented/object-function scripting languages such as Python, Java, JavaScript, C#, Scala, etc. is required.
  • Expertise in creating technical and architecture documentation (e.g. HLD/LLD) is a must.
  • Proven ability to rapidly analyse and design solution architecture in client proposals is an added advantage.
  • Expertise with big data tools (Hadoop, Spark, Kafka, NoSQL databases, stream-processing systems) is a plus.
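The feed-integration bullet above can be made concrete with a minimal sketch: two hypothetical feeds, one structured (JSON) and one flat (CSV), are normalised into a common record shape before staging. The field names and payloads are invented for illustration; in practice this logic would run inside an Azure Data Factory or Databricks activity.

```python
import csv
import io
import json

def records_from_json(payload: str) -> list:
    """Parse a structured JSON feed into a list of flat records."""
    return json.loads(payload)

def records_from_csv(payload: str) -> list:
    """Parse a flat CSV feed into the same record shape."""
    return list(csv.DictReader(io.StringIO(payload)))

# Two feeds delivering the same logical entity in different schemas.
json_feed = '[{"id": "1", "amount": "250"}]'
csv_feed = "id,amount\n2,410\n"

# Normalise both into one staging list with a uniform shape.
staged = records_from_json(json_feed) + records_from_csv(csv_feed)
print(staged)  # [{'id': '1', 'amount': '250'}, {'id': '2', 'amount': '410'}]
```

The same normalise-then-stage pattern applies regardless of the delivery mechanism (API pull, SFTP drop, etc.).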
Essential Experience:

  • 5 or more years of hands-on experience in a data architect role, covering the development of ingestion, integration, data auditing, reporting, and testing with the Azure SQL tech stack.
  • Full data and analytics project lifecycle experience (including costing and cost management of data solutions) in an Azure PaaS environment is essential.
  • Microsoft Azure and Data certifications, at least fundamentals, are a must.
  • Experience using agile development methodologies, version control systems, and repositories is a must.
  • A good, applied understanding of the end-to-end data process development life cycle.
  • A good working knowledge of data warehouse methodology using Azure SQL.
  • A good working knowledge of the Azure platform, its components, and the ability to leverage its resources to implement solutions is a must.
  • Experience working in the public sector, or in an organisation serving the public sector, is a must.
  • Ability to work to demanding deadlines, keep momentum, and deal with conflicting priorities in an environment undergoing a programme of transformational change.
  • The ability to contribute to and adhere to standards, excellent attention to detail, and being strongly driven by quality.
Desirables:

  • Experience with AWS or Google Cloud platforms will be an added advantage.
  • Experience with Azure ML services will be an added advantage.

Personal Attributes:

  • Articulate and clear in communications to mixed audiences: in writing, through presentations, and one-to-one.
  • Ability to present highly technical concepts and ideas in business-friendly language.
  • Ability to effectively prioritise and execute tasks in a high-pressure environment.
  • Calm and adaptable in the face of ambiguity and in a fast-paced, quick-changing environment.
  • Extensive experience working in a team-oriented, collaborative environment as well as working independently.
  • Comfortable with the multi-project, multi-tasking lifestyle of a consulting Data Architect.
  • Excellent interpersonal skills, working with teams and building trust with clients.
  • Ability to support and work with cross-functional teams in a dynamic environment.
  • A passion for achieving business transformation; the ability to energise and excite those you work with.
  • Initiative; the ability to work flexibly in a team, working comfortably without direct supervision.

Similar jobs

Series B funded product startup
Agency job
via Qrata by Blessy Fernandes
Delhi
2 - 5 yrs
₹8L - ₹14L / yr
Data Science
Machine Learning (ML)
Python
Java

Job Title - Data Scientist

 

Job Duties

  1. Data Scientist responsibilities include planning projects and building analytics models.
  2. You should have a strong problem-solving ability and a knack for statistical analysis.
  3. If you're also able to align our data products with our business goals, we'd like to meet you. Your ultimate goal will be to help improve our products and business decisions by making the most of our data.

 

Responsibilities

Own end-to-end business problems and metrics, build and implement ML solutions using cutting-edge technology.

Create scalable solutions to business problems using statistical techniques, machine learning, and NLP.

Design, experiment and evaluate highly innovative models for predictive learning

Work closely with software engineering teams to drive real-time model experiments, implementations, and new feature creations

Establish scalable, efficient, and automated processes for large-scale data analysis, model development, deployment, experimentation, and evaluation.

Research and implement novel machine learning and statistical approaches.

 

Requirements

 

2-5 years of experience in data science.

In-depth understanding of modern machine learning techniques and their mathematical underpinnings.

Demonstrated ability to build PoCs for complex, ambiguous problems and scale them up.

Strong programming skills (Python, Java)

High proficiency in at least one of the following broad areas: machine learning, statistical modelling/inference, information retrieval, data mining, NLP

Experience with SQL and NoSQL databases

Strong organizational and leadership skills

Excellent communication skills

My Client is the world’s largest media investment company.
Agency job
via Merito by Jinita Sumaria
Gurugram
5 - 10 yrs
Best in industry
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+4 more

The Client is the world’s largest media investment company. Our team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, and e-commerce, as well as across traditional channels.


Responsibilities of the role:

·      Manage extraction of data sets from multiple marketing/database platforms and perform hygiene and quality-control steps, either via Datorama or in partnership with the Neo Technology team. Data sources will include: web analytics tools, media analytics, customer databases, social listening tools, search tools, syndicated data, research & survey tools, etc.

· Implement and manage data system architecture

· Audit and manage data taxonomy/classifications from multiple systems and partners

· Manage the extraction, loading, and transformation of multiple data sets

· Cleanse all data and metrics; perform override updates where necessary

· Execute all business rules according to requirements

· Identify and implement opportunities for efficiency throughout the process

· Manage and execute thorough QA process and ensure quality and accuracy of all data and reporting deliverables

· Manipulate and analyze “big” data sets synthesized from a variety of sources, including media platforms and marketing automation tools

· Generate and manage all data visualizations and ensure data is presented accurately and is visually pleasing

· Assist analytics team in running numerous insights reports as needed

· Help maintain a performance platform and provide insights and ongoing recommendations.
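The cleanse-and-override steps above can be sketched in a few lines of Python. The campaign rows, metric formats, and override table are hypothetical stand-ins for the kind of inputs a Datorama-style pipeline would handle.

```python
# Raw rows pulled from two hypothetical platforms, with messy metrics.
raw = [
    {"campaign": "A", "clicks": "1,204", "source": "search"},
    {"campaign": "B", "clicks": "98",    "source": "social"},
]

# Manual override updates keyed by campaign (e.g. restated numbers).
overrides = {"B": {"clicks": 105}}

def cleanse(row):
    row = dict(row)
    # Strip thousands separators and coerce metrics to integers.
    row["clicks"] = int(str(row["clicks"]).replace(",", ""))
    # Apply override updates last, so they win over the raw feed.
    row.update(overrides.get(row["campaign"], {}))
    return row

clean = [cleanse(r) for r in raw]
print([r["clicks"] for r in clean])  # [1204, 105]
```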

 

Requirements :

· 5+ years’ experience in an analytics position working with large amounts of data

· Hands-on experience working with data visualization tools such as Datorama, Tableau, or Power BI

· Additional desirable skills include tag management experience, application coding experience, statistics background

· Digital media experience background preferred, including knowledge of Doubleclick and web analytics tools

· Excellent communication skills

· Experience with HTML, CSS, and JavaScript is a plus

 

one-to-one, one-to-many, and many-to-many
Agency job
via The Hub by Sridevi Viswanathan
Chennai
5 - 10 yrs
₹1L - ₹15L / yr
AWS CloudFormation
Python
PySpark
AWS Lambda

5-7 years of experience in Data Engineering with solid experience in design, development and implementation of end-to-end data ingestion and data processing system in AWS platform.

2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.

Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.

Experience building data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms, or similar source systems.

Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.

Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.

Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.

Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.

Experience working in Agile/Scrum methodologies; CI/CD tools and practices; coding standards; code reviews; source management (GitHub); JIRA, JIRA Xray, and Confluence.

Experience or exposure to design and development using Full Stack tools.

Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.

Bachelor's or master's degree in computer science or related field.
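One building block behind the "real-time (or near real time) stream data processing as well as batch processing" requirement above is micro-batching: grouping an unbounded event stream into fixed-size chunks before processing. Here is a library-free sketch with invented event records; on AWS this role would be played by Kinesis shards feeding Lambda or Glue streaming jobs.

```python
from itertools import islice

def micro_batches(stream, size):
    """Group an unbounded event stream into fixed-size batches."""
    it = iter(stream)
    while batch := list(islice(it, size)):
        yield batch

# A generator stands in for an incoming stream of events.
events = ({"id": i, "value": i * 10} for i in range(7))

batches = [[e["id"] for e in b] for b in micro_batches(events, 3)]
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

The final short batch shows why downstream consumers must tolerate partial windows, a common concern when tuning stream-processing jobs.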

 

 

Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
In this role, you will be part of a growing, global team of data engineers, who collaborate in DevOps mode, in order to enable Merck business with state-of-the-art technology to leverage data as an asset and to take better informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premises in Merck’s own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and access layers (e.g. JDBC, MySQL, Microsoft SQL Server); not all types required

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operation of an application
• Act as business analyst for developing requirements for Foundry pipelines
• Review code developed by other data engineers and check against platform-specific standards, cross-cutting concerns, coding and configuration standards and functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implementation of changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• DevOps project setup following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across the full stack of Foundry and code based on Python, PySpark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
Sixt R&D
Sixt R&D
Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹11L - ₹14L / yr
SQL
Python
RESTful APIs
Business Intelligence (BI)
QuickSight
+6 more

Technical-Requirements: 

  • Bachelor's Degree in Computer Science or a related technical field, and solid years of relevant experience. 
  • A strong grasp of SQL/Presto and at least one scripting (Python, preferable) or programming language. 
  • Experience with enterprise-class BI tools and their auditing, along with automation using REST APIs. 
  • Experience with reporting tools – QuickSight preferred, with at least 2 years hands-on. 
  • Tableau/Looker (either would suffice, with at least 5+ years hands-on). 
  • 5+ years of experience with and detailed knowledge of data warehouse technical architectures, data modelling, infrastructure components, ETL/ ELT and reporting/analytic tools and environments, data structures and hands-on SQL coding. 
  • 5+ years of demonstrated quantitative and qualitative business intelligence. 
  • Experience with significant product analysis based business impact. 
  • 4+ years of large IT project delivery for BI oriented projects using agile framework. 
  • 2+ years of working with very large data warehousing environment. 
  • Experience in designing and delivering cross functional custom reporting solutions. 
  • Excellent oral and written communication skills including the ability to communicate effectively with both technical and non-technical stakeholders. 
  • Proven ability to meet tight deadlines, multi-task, and prioritize workload. 
  • A work ethic based on a strong desire to exceed expectations. 
  • Strong analytical and challenge process skills.
Blue Sky Analytics
Posted by Balahun Khonglanoh
Remote only
1 - 5 yrs
Best in industry
NumPy
SciPy
skill iconData Science
skill iconPython
pandas
+8 more

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a data scientist to join our growing team. This position will require you to think and act on the geospatial architecture and data needs of the company (specifically geospatial data). This position is strategic and will also require you to collaborate closely with data engineers, data scientists, software developers, and even colleagues from other business functions. Come save the planet with us!


Your Role

Manage: It goes without saying that you will be handling large amounts of image and location datasets. You will develop dataframes and automated pipelines of data from multiple sources. You are expected to know how to visualize them and use machine learning algorithms to be able to make predictions. You will be working across teams to get the job done.

Analyze: You will curate and analyze vast amounts of geospatial datasets like satellite imagery, elevation data, meteorological datasets, OpenStreetMap data, demographic data, socio-econometric data, and topography to extract useful insights about the events happening on our planet.

Develop: You will be required to develop processes and tools to monitor and analyze data and its accuracy. You will develop innovative algorithms useful for tracking global environmental problems like depleting water levels, illegal tree logging, and even oil spills.

Demonstrate: A familiarity with working in geospatial libraries such as GDAL/Rasterio for reading/writing of data, and use of QGIS in making visualizations. This will also extend to using advanced statistical techniques and applying concepts like regression, properties of distribution, and conduct other statistical tests.

Produce: With all the hard work being put into data creation and management, it has to be used! You will be able to produce maps showing (but not limited to) spatial distribution of various kinds of data, including emission statistics and pollution hotspots. In addition, you will produce reports that contain maps, visualizations and other resources developed over the course of managing these datasets.

Requirements

These are must have skill-sets that we are looking for:

  • Excellent coding skills in Python (including deep familiarity with NumPy, SciPy, pandas).
  • Significant experience with git, GitHub, SQL, AWS (S3 and EC2).
  • Worked on GIS and is familiar with geospatial libraries such as GDAL and rasterio to read/write the data, a GIS software such as QGIS for visualisation and query, and basic machine learning algorithms to make predictions.
  • Demonstrable experience implementing efficient neural network models and deploying them in a production environment.
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
  • Capable of writing clear and lucid reports and demystifying data for the rest of us.
  • Be curious and care about the planet!
  • Minimum 2 years of demonstrable industry experience working with large and noisy datasets.
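As a rough illustration of the windowed raster operations that GDAL/rasterio enable, here is a dependency-free sketch computing per-tile means over a toy "elevation" grid. Real code would read GeoTIFF windows with rasterio rather than nested Python lists; the grid values here are invented.

```python
def tile_means(raster, tile):
    """Mean pixel value per non-overlapping tile of a 2-D grid."""
    rows, cols = len(raster), len(raster[0])
    means = []
    for r in range(0, rows, tile):
        row_means = []
        for c in range(0, cols, tile):
            # Collect the pixels inside one tile window.
            window = [raster[i][j]
                      for i in range(r, min(r + tile, rows))
                      for j in range(c, min(c + tile, cols))]
            row_means.append(sum(window) / len(window))
        means.append(row_means)
    return means

# A toy 4x4 "elevation" grid split into 2x2 tiles.
grid = [[1, 1, 5, 5],
        [1, 1, 5, 5],
        [9, 9, 3, 3],
        [9, 9, 3, 3]]
print(tile_means(grid, 2))  # [[1.0, 5.0], [9.0, 3.0]]
```

Aggregations like this, run per tile over satellite imagery, are how maps of spatial distributions (emission statistics, pollution hotspots) get produced.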

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community whose tools you can use, contribute to, and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly Retreats: Yes, there's work, but then there's all the non-work fun aspect, aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
Hex Business Innovations
Posted by Dhruv Dua
Faridabad
0 - 4 yrs
₹1L - ₹3L / yr
SQL
SQL server
MySQL
MS SQLServer
C#
+1 more

Job Summary
SQL development for our Enterprise Resource Planning (ERP) product offered to SMEs. Regular modification, creation, and validation, with testing, of stored procedures, views, and functions on MS SQL Server.
Responsibilities and Duties
Understanding the ERP software and its use cases.
Regular creation, modification, and testing of:

  • Stored Procedures
  • Views
  • Functions
  • Nested Queries
  • Table and Schema Designs

Qualifications and Skills
MS SQL

  • Procedural Language
  • Datatypes
  • Objects
  • Databases
  • Schema
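A small sketch of the view-plus-query pattern listed above, using SQLite from the Python standard library as a stand-in for MS SQL Server. The table and column names are invented; on SQL Server the view would be created with T-SQL and exercised from the application layer the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
    INSERT INTO orders (customer, amount)
        VALUES ('acme', 120.0), ('acme', 80.0), ('globex', 50.0);
    -- A view encapsulating a reporting query, as an ERP schema object would.
    CREATE VIEW customer_totals AS
        SELECT customer, SUM(amount) AS total
        FROM orders GROUP BY customer ORDER BY customer;
""")

rows = conn.execute("SELECT customer, total FROM customer_totals").fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

Testing a view like this means asserting the query result against known fixture rows, which is the validation step the job description calls out.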
Foghorn Systems
Posted by Abhishek Vijayvargia
Pune
0 - 7 yrs
₹15L - ₹50L / yr
R Programming
Python
Data Science

Role and Responsibilities

  • Execute data mining projects, training and deploying models over a typical duration of 2-12 months.
  • The ideal candidate should be able to innovate, analyze the customer requirement, develop a solution within the time box of the project plan, and execute and deploy the solution.
  • Integrate data mining projects as embedded data mining applications in the FogHorn platform (on Docker or Android).

Core Qualifications
Candidates must meet ALL of the following qualifications:

  • Have analyzed, trained and deployed at least three data mining models in the past. If the candidate did not directly deploy their own models, they will have worked with others who have put their models into production. The models should have been validated as robust over at least an initial time period.
  • Three years of industry work experience, developing data mining models which were deployed and used.
  • Programming experience in Python is core, using data-mining-related libraries like Scikit-Learn. Other relevant Python mining libraries include NumPy, SciPy, and Pandas.
  • Data mining algorithm experience in at least 3 algorithms across: prediction (statistical regression, neural nets, deep learning, decision trees, SVM, ensembles), clustering (k-means, DBSCAN or other) or Bayesian networks
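As an example of the clustering family named above, here is a minimal one-dimensional k-means sketch in plain Python. The sensor readings and initial centers are invented, and production work would use Scikit-Learn's KMeans; this only shows the assign-then-re-average loop.

```python
def kmeans_1d(points, centers, iterations=10):
    """Plain k-means on scalars: assign to nearest center, then re-average."""
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(v) / len(v) if v else c for c, v in clusters.items()]
    return sorted(centers)

# Toy sensor readings forming two obvious groups.
readings = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print([round(c, 6) for c in kmeans_1d(readings, [0.0, 5.0])])  # [1.0, 10.0]
```

The two recovered centers separate the low and high reading groups, the same behaviour DBSCAN or multi-dimensional k-means delivers on richer IoT feature vectors.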

Bonus Qualifications
Any of the following extra qualifications will make a candidate more competitive:

  • Soft Skills
    • Sets expectations, develops project plans and meets expectations.
    • Experience adapting technical dialogue to the right level for the audience (i.e. executives) or specific jargon for a given vertical market and job function.
  • Technical skills
    • Commonly, candidates have a MS or Ph.D. in Computer Science, Math, Statistics or an engineering technical discipline. BS candidates with experience are considered.
    • Have managed past models in production over their full life cycle until model replacement is needed. Have developed automated model refreshing on newer data. Have developed frameworks for model automation as a prototype for product.
    • Training or experience in Deep Learning, such as TensorFlow, Keras, convolutional neural networks (CNN) or Long Short Term Memory (LSTM) neural network architectures. If you don’t have deep learning experience, we will train you on the job.
    • Shrinking deep learning models, optimizing to speed up execution time of scoring or inference.
    • OpenCV or other image processing tools or libraries
    • Cloud computing: Google Cloud, Amazon AWS or Microsoft Azure. We have integration with Google Cloud and are working on other integrations.
    • Decision-tree ensembles like XGBoost or Random Forests are helpful.
    • Complex Event Processing (CEP) or other streaming data as a data source for data mining analysis
    • Time series algorithms from ARIMA to LSTM to Digital Signal Processing (DSP).
    • Bayesian Networks (BN), a.k.a. Bayesian Belief Networks (BBN) or Graphical Belief Networks (GBN)
    • Experience with PMML is of interest (see www.DMG.org).
  • Vertical experience in Industrial Internet of Things (IoT) applications:
    • Energy: Oil and Gas, Wind Turbines
    • Manufacturing: Motors, chemical processes, tools, automotive
    • Smart Cities: Elevators, cameras on population or cars, power grid
    • Transportation: Cars, truck fleets, trains

 

About FogHorn Systems
FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT application solutions. FogHorn’s Lightning software platform brings the power of advanced analytics and machine learning to the on-premise edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as Smart Grid, Smart City, Smart Building and connected vehicle applications.

Press: https://www.foghorn.io/press-room/

Awards: https://www.foghorn.io/awards-and-recognition/

  • 2019 Edge Computing Company of the Year – Compass Intelligence
  • 2019 Internet of Things 50: 10 Coolest Industrial IoT Companies – CRN
  • 2018 IoT Platforms Leadership Award & Edge Computing Excellence – IoT Evolution World Magazine
  • 2018 10 Hot IoT Startups to Watch – Network World. (Gartner estimated 20 billion connected things in use worldwide by 2020)
  • 2018 Winner in Artificial Intelligence and Machine Learning – Globe Awards
  • 2018 Ten Edge Computing Vendors to Watch – ZDNet & 451 Research
  • 2018 The 10 Most Innovative AI Solution Providers – Insights Success
  • 2018 The AI 100 – CB Insights
  • 2017 Cool Vendor in IoT Edge Computing – Gartner
  • 2017 20 Most Promising AI Service Providers – CIO Review

Our Series A round was for $15 million. Our Series B round, in October 2017, was for $30 million. Investors include: Saudi Aramco Energy Ventures, Intel Capital, GE, Dell, Bosch, Honeywell, and The Hive.

About the Data Science Solutions team
In 2018, our Data Science Solutions team grew from 4 to 9, and we are growing again from 11. We work on revenue-generating projects for clients, such as predictive maintenance, time-to-failure, and manufacturing defects. About half of our projects have been related to vision recognition or deep learning. We are not only working on consulting projects but also developing vertical solution applications that run on our Lightning platform, with embedded data mining.

Our data scientists like our team because:

  • We care about “best practices”
  • We have a direct impact on the company’s revenue
  • We give and receive mentoring as part of the collaborative process
  • Questioning and challenging the status quo with data is safe
  • Intellectual curiosity is balanced with humility
  • We present papers or projects in our “Thought Leadership” meeting series, to support continuous learning

 

LatentView Analytics
Posted by Kannikanti madhuri
Chennai
1 - 4 yrs
₹2L - ₹10L / yr
Business Intelligence (BI)
Analytics
SQL server
Data Visualization
Tableau
+7 more
Title: Analyst - Business Analytics
Experience: 1 - 4 Years
Location: Chennai
Open Positions: 17

Job Description:

Roles & Responsibilities:
- Designing and implementing analytical projects that drive business goals and decisions, leveraging structured and unstructured data.
- Generating a compelling story from insights and trends in a complex data environment.
- Working shoulder-to-shoulder with business partners to come up with creative approaches to solve the business problem.
- Creating dashboards for business heads by exploring available data assets.

Qualifications:
- Overall 1+ years of Business Analytics experience with strong communication skills.
- Bachelor's or Master's degree in computer science is preferred.
- Excellent problem-solving and client-orientation skills.

Skills Required:
- Ability to program in advanced SQL is a must.
- Hands-on experience in modeling tools such as R or Python.
- Experience in visualization tools such as Power BI, Tableau, Looker, etc., would be a big plus.
- Analytics certifications from recognized platforms would be a plus: Udemy, Coursera, EDX, etc.
SpotDraft
Posted by Madhav Bhagat
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹3L - ₹24L / yr
Python
TensorFlow
Caffe
We are building the AI core for a legal workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents.

Required Skills/Experience:
- Python
- Basics of deep learning
- Experience with one ML framework (like TensorFlow, Keras, Caffe)

Preferred Skills/Experience:
- Exposure to ML concepts like LSTM, RNN, and conv nets
- Experience with NLP and the Stanford POS tagger
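For intuition on what "extract relevant information from contracts" means as an input/output problem, here is a rule-based baseline sketch. The contract text and field names are invented, and in the role described above a trained model (e.g. an LSTM-based tagger) would replace the regexes; the target output shape stays the same.

```python
import re

# A toy contract snippet standing in for a real legal document.
contract = """This Agreement is entered into on 1 March 2023
between Acme Corp ("Supplier") and Globex Ltd ("Customer").
The term of this Agreement is 24 months."""

# Extract structured fields from free text; a model would score spans instead.
fields = {
    "effective_date": re.search(
        r"entered into on (.+)$", contract, re.M).group(1),
    "term_months": int(re.search(
        r"term of this Agreement is (\d+) months", contract).group(1)),
}
print(fields)  # {'effective_date': '1 March 2023', 'term_months': 24}
```

A baseline like this also doubles as an evaluation harness: the learned model's extractions can be compared field-by-field against it.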