We seek a highly motivated individual with demonstrated analytical abilities for the role of Data Scientist, based in Delhi. The Data Scientist will analyze programme data and support government stakeholders in scientific research; synthesize data from multiple datasets; and create frameworks for analyzing programme data that can inform strategic programmatic decisions. The Data Scientist will also be responsible for proposing additional research and programme management initiatives supported by data and data collection exercises.

The following criteria are mandatory requirements, and we strongly encourage you to apply only if you meet all of them:
- Master's/Bachelor's degree in engineering or related fields (coursework in linear algebra, calculus, and computer programming) with 5+ years' work experience
- Experience integrating ML projects into software systems
- Proficient in SQL: ability to write production-grade queries (including nested queries); understanding of stored procedures, table partitioning, AD security, and indexes
- Proficient in R/Python: familiarity with control structures, loops, and OOP principles; understanding of notebooks
- Intermediate familiarity with data structures and algorithms
- Understanding of text operations, including regular expressions and string processing
- Proficient in machine learning techniques: familiarity with linear and logistic regression; basic experience with tree and tree-ensemble learning techniques (bagging and boosting); familiarity with unsupervised learning techniques (k-NN, cosine similarity, hierarchical clustering)
- Proficient in NLP: named entity extraction, POS tagging, cosine similarity, sequence networks such as RNNs (Recurrent Neural Networks), BERT, and Word2Vec
- Experience with distributed systems such as Hadoop/Spark (PySpark/SparkR)
- Experience with open-source BI tools such as Metabase and D3
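For candidates gauging the expected skill level, the "cosine similarity" item above can be sketched in plain Python over simple term-frequency vectors. This is an illustrative sketch only; the function name and sample strings are ours, not part of the posting:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts using term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the terms the two texts share.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

print(round(cosine_similarity("big data spark", "big data hadoop"), 3))  # 0.667
```

Production NLP work would typically use TF-IDF or embedding vectors rather than raw counts, but the underlying computation is the same.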
Dear Candidate,

Greetings of the day! As discussed, please find the job description below.

Job Title: Hadoop Developer
Experience: 3+ years
Job Location: New Delhi
Job Type: Permanent

Knowledge and Skills Required:
- Brief skills: Hadoop, Spark, Scala, and Spark SQL
- Strong experience in Hadoop development
- Experience in Spark
- Experience in Scala
- Experience in Spark SQL

Why OTSI? Working with OTSI gives you the assurance of a successful, fast-paced career. Exposure to infinite opportunities to learn and grow, familiarization with cutting-edge technologies, cross-domain experience, and a harmonious environment are some of the prime attractions for a career-driven workforce. Join us today, as we assure you 2000+ friends and a great career; happiness begins at a great workplace! Feel free to refer this opportunity to your friends and associates.

About OTSI (CMMI Level 3): Founded in 1999 and headquartered in Overland Park, Kansas, OTSI offers global reach and local delivery to companies of all sizes, from start-ups to Fortune 500s. Through offices across the US and around the world, we provide universal access to exceptional talent and innovative solutions in a variety of delivery models to reduce overall risk while optimizing outcomes and enabling our customers to thrive in a global economy. OTSI's global presence, scalable and sustainable world-class infrastructure, business continuity processes, and ISO 9001:2000 and CMMI Level 3 certifications make us a preferred service provider for our clients. OTSI has expertise in different technologies, enhanced by our partnerships and alliances with industry giants such as HP, Microsoft, IBM, Oracle, and SAP.
A highly reputable local company with a proven record of serving the UAE Government's IT needs is seeking to attract, employ, and develop people with exceptional skills who want to make a difference in a challenging environment.

Object Technology Solutions India Pvt Ltd is a leading global Information Technology (IT) services and solutions company offering a wide array of solutions for a range of key verticals. The company is headquartered in Overland Park, Kansas, and has a strong presence in the US, Europe, and Asia-Pacific, with a Global Delivery Center based in India. OTSI offers a broad range of IT application solutions and services, including e-business solutions, Enterprise Resource Planning (ERP) implementation and post-implementation support, application development, application maintenance, and software customization services.

OTSI Partners & Practices:
- SAP Partner
- Microsoft Silver Partner
- Oracle Gold Partner
- Microsoft CoE
- DevOps Consulting
- Cloud
- Mobile & IoT
- Digital Transformation
- Big Data & Analytics
- Testing Solutions

OTSI Honors & Awards: #91 in the Inc. 5000 list of fastest-growing IT companies.
JD: Required Skills:
- Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala
- Strong practical knowledge of SQL
- Hands-on experience with Spark/Spark SQL
- Data structures and algorithms
- Hands-on experience as an individual contributor in the design, development, testing, and deployment of Big Data applications
- Experience with Big Data tools such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases such as HBase
- Experience with the Linux OS environment (shell scripting, AWK, SED)
- Intermediate RDBMS skills: able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
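As a rough illustration of the kind of nested SQL query such roles expect (shown here against SQLite for self-containment; the schema, table, and column names are invented for this sketch):

```python
import sqlite3

# Hypothetical schema: one row per order; find customers whose total
# spend exceeds the average per-customer total -- a nested-query pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 120.0), ('alice', 80.0),
        ('bob', 40.0), ('carol', 300.0);
""")

rows = conn.execute("""
    SELECT customer, total FROM (
        SELECT customer, SUM(amount) AS total
        FROM orders GROUP BY customer
    )
    WHERE total > (
        SELECT AVG(t) FROM (
            SELECT SUM(amount) AS t FROM orders GROUP BY customer
        )
    )
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('carol', 300.0), ('alice', 200.0)]
```

On a production RDBMS the same shape would typically be written with CTEs (`WITH ... AS`) and backed by appropriate indexes.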
About the job:
- You will work with data scientists to architect, code, and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe, and live Python, sklearn, and pandas. Experience with these is good to have but not a necessity, as long as you're super comfortable in a language of your choice
- You will develop tools and products that give analysts ready access to the data

About you:
- Strong CS fundamentals
- Strong experience working with production environments
- You write code that is clean, readable, and tested
- Instead of doing something a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (Psql, S3, Hadoop, Hive, Presto, Spark, etc.)
- It would be great if you have a Kaggle or GitHub profile to share
- You are an expert in one or more programming languages (Python preferred). Experience with Python-based application development and data science libraries is also good to have
- Ideally, you have 2+ years of experience in tech and/or data
- Degree in CS/Maths from a Tier-1 institute
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:
- Experience with Big Data technologies and tools such as Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience architecting data ingestion, storage, and consumption models
- Experience with NoSQL databases such as MongoDB, HBase, and Cassandra
- Knowledge of various ETL tools and techniques
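For context on the MapReduce model named in several of these postings, the map/shuffle/reduce stages can be sketched as a toy word count in plain Python (this mimics the programming model only, not the Hadoop API; the sample lines are invented):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    # Map: emit a (word, 1) pair for every word in a line.
    return [(word, 1) for word in line.lower().split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["hadoop spark kafka", "spark hive", "kafka spark"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
result = reduce_phase(shuffle(pairs))
print(result["spark"])  # 3
```

In Hadoop or Spark the same three stages run in parallel across a cluster, with the shuffle moving data between nodes.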
Looking for a technically sound, excellent trainer in Big Data technologies. Get an opportunity to become well known in the industry and gain visibility. Host regular sessions on Big Data technologies and get paid to learn.