Dear Candidate,

Greetings of the day! As discussed, please find the job description below.

Job Title: Hadoop Developer
Experience: 3+ years
Job Location: New Delhi
Job Type: Permanent

Knowledge and Skills Required:
Brief skills: Hadoop, Spark, Scala and Spark SQL
Main skills:
- Strong experience in Hadoop development
- Experience in Spark
- Experience in Scala
- Experience in Spark SQL

Why OTSI!
Working with OTSI gives you the assurance of a successful, fast-paced career. Exposure to infinite opportunities to learn and grow, familiarization with cutting-edge technologies, cross-domain experience and a harmonious environment are some of the prime attractions for a career-driven workforce. Join us today, as we assure you 2000+ friends and a great career; happiness begins at a great workplace! Feel free to refer this opportunity to your friends and associates.

About OTSI (CMMI Level 3):
Founded in 1999 and headquartered in Overland Park, Kansas, OTSI offers global reach and local delivery to companies of all sizes, from start-ups to Fortune 500s. Through offices across the US and around the world, we provide universal access to exceptional talent and innovative solutions in a variety of delivery models to reduce overall risk while optimizing outcomes and enabling our customers to thrive in a global economy. OTSI's global presence, scalable and sustainable world-class infrastructure, business continuity processes, and ISO 9001:2000 and CMMI Level 3 certifications make us a preferred service provider for our clients. OTSI has expertise in different technologies, enhanced by our partnerships and alliances with industry giants like HP, Microsoft, IBM, Oracle, SAP and others. A highly reputed local company with a proven record of serving the UAE Government's IT needs, we seek to attract, employ and develop people with exceptional skills who want to make a difference in a challenging environment. Object Technology Solutions India Pvt Ltd is a leading global Information Technology (IT) services and solutions company offering a wide array of solutions for a range of key verticals. The company is headquartered in Overland Park, Kansas, and has a strong presence in the US, Europe and Asia-Pacific, with a Global Delivery Center based in India. OTSI offers a broad range of IT application solutions and services including e-Business solutions, Enterprise Resource Planning (ERP) implementation and post-implementation support, application development, application maintenance, and software customization services.

OTSI Partners & Practices: SAP Partner, Microsoft Silver Partner, Oracle Gold Partner, Microsoft CoE, DevOps Consulting, Cloud, Mobile & IoT, Digital Transformation, Big Data & Analytics, Testing Solutions

OTSI Honors & Awards: #91 in Inc. 5000, among the fastest-growing IT companies in Inc. 5000
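As a rough illustration of the day-to-day Spark SQL work this role describes, here is a minimal PySpark sketch. The dataset, HDFS paths and column names are hypothetical, not part of the posting.

```python
# Minimal PySpark / Spark SQL sketch; paths and column names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hadoop-developer-example").getOrCreate()

# Read raw events stored on HDFS (path is a placeholder)
events = spark.read.parquet("hdfs:///data/raw/events")

# Register the DataFrame as a temporary view and aggregate with Spark SQL
events.createOrReplaceTempView("events")
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")

# Write the curated result back to HDFS
daily_counts.write.mode("overwrite").parquet("hdfs:///data/curated/daily_counts")
spark.stop()
```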
Data Engineering role at ThoughtWorks

ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years now. They've also been writing books, speaking at conferences, and helping push software development forward -- changing companies and even industries along the way. As consultants, we work with our clients to ensure we're delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our Data Scientists
- Hands-on programming based on TDD, usually in a pair-programming environment
- Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have:
- 2-6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java, Scala or Python
- Experience building data pipelines and data-centric applications using distributed storage platforms like HDFS, S3 and NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, Airflow, Kafka, etc. in a production setting
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks and/or cloud platforms (AWS EMR, Azure HDInsight, Qubole, etc.)
- Knowledge of software best practices like Test-Driven Development (TDD), Continuous Integration (CI) and Agile development
- Strong communication skills and the ability to work in a consulting environment are essential

And here are some of the perks of being part of a unique organization like ThoughtWorks:
- A real commitment to "changing the face of IT" -- our way of thinking about diversity and inclusion. Over the past ten years, we've implemented a lot of initiatives to make ThoughtWorks a place that reflects the world around us, and to make this a welcoming home to technologists of all stripes. We're not perfect, but we're actively working towards true gender balance for our business and our industry, and you'll see that diversity reflected on our project teams and in our offices.
- Continuous learning. You'll be constantly exposed to new languages, frameworks and ideas from your peers and as you work on different projects -- challenging you to stay at the top of your game.
- Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you'll find a community of passionate technologists eager to back your endeavors, whatever they may be. You'll also receive financial support to attend conferences every year.
- An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you'll find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities.
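To make the TDD-on-pipelines point above concrete, here is a hedged sketch of a small, testable transformation in plain Python. The function and test names are invented for illustration and are not part of any ThoughtWorks codebase.

```python
# Illustrative only: a pure transformation function plus a unit test,
# in the TDD / pair-programming style the role describes (run with pytest).
from typing import Dict, List


def deduplicate_records(records: List[Dict], key: str) -> List[Dict]:
    """Keep the first occurrence of each value of `key`, preserving order."""
    seen = set()
    result = []
    for record in records:
        if record[key] not in seen:
            seen.add(record[key])
            result.append(record)
    return result


def test_deduplicate_records_keeps_first_occurrence():
    records = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
    assert deduplicate_records(records, key="id") == [
        {"id": 1, "v": "a"},
        {"id": 2, "v": "c"},
    ]
```

Writing pipeline steps as small pure functions like this keeps them easy to test before they are wired into a distributed job.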
If you relish the idea of being part of ThoughtWorks’ Data Practice that extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you. If you share our passion for technology and want to help change the world with software, we want to hear from you!
JD: Required Skills:
- Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark or Scala
- Strong practical knowledge of SQL
- Hands-on experience with Spark/Spark SQL
- Data structures and algorithms
- Hands-on experience as an individual contributor in the design, development, testing and deployment of applications based on Big Data technologies
- Experience with Big Data tools such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases like HBase, etc.
- Experience with Linux OS environments (shell scripting, AWK, sed)
- Intermediate RDBMS skills, able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
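For illustration, a minimal sketch of the kind of multi-table Spark SQL query this posting alludes to. The table and column names (`orders`, `customers`, etc.) are hypothetical and assume the tables already exist in a Hive metastore.

```python
# Hypothetical multi-table join in Spark SQL; all table/column names are made up.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("spark-sql-join-example")
    .enableHiveSupport()  # assumes a Hive metastore is configured
    .getOrCreate()
)

top_customers = spark.sql("""
    SELECT c.customer_id, c.region, SUM(o.amount) AS total_spend
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    WHERE o.order_date >= '2020-01-01'
    GROUP BY c.customer_id, c.region
    ORDER BY total_spend DESC
    LIMIT 100
""")
top_customers.show()
```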
Job Description:
The Data Engineering team is one of the core technology teams of Lumiq.ai and is responsible for creating all the data-related products and platforms, which scale to any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures and deliver the products and solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer, or how a customer can use our products, then Lumiq is the place of opportunities.

Who are you?
- Enthusiast is your middle name. You know what's new in Big Data technologies and how things are moving.
- Apache is your toolbox, and you have been a contributor to open source projects or have discussed problems with the community on several occasions.
- You use the cloud for more than just provisioning a virtual machine.
- Vim is friendly to you, and you know how to exit Nano.
- You check logs before screaming about an error.
- You are a solid engineer who writes modular code and commits in Git.
- You are a doer who doesn't say "no" without first understanding.
- You understand the value of documenting your work.
- You are familiar with the Machine Learning ecosystem and how you can help your fellow Data Scientists explore data and create production-ready ML pipelines.

Eligibility:
- At least 2 years of Data Engineering experience
- Have interacted with customers

Must-have skills:
- Amazon Web Services (AWS) - EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
- Apache Spark
- Python
- Scala
- PostgreSQL
- Git
- Linux

Good-to-have skills:
- Apache NiFi
- Apache Kafka
- Apache Hive
- Docker
- Amazon certification
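As a rough, hedged sketch of the S3-to-PostgreSQL style of pipeline this stack implies, the snippet below reads from S3 on an EMR-like cluster and writes to Postgres over JDBC. The bucket, host, table and credentials are placeholders, not real values.

```python
# Illustrative S3 -> Spark -> PostgreSQL pipeline; bucket, JDBC URL and
# credentials are placeholders. Requires the Postgres JDBC driver on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-to-postgres-example").getOrCreate()

# Read raw data from S3 (on EMR, the s3:// scheme is available out of the box)
df = spark.read.json("s3://example-bucket/raw/events/")

# Light cleanup before loading
clean = df.dropDuplicates(["event_id"]).na.drop(subset=["event_id"])

# Write to PostgreSQL via JDBC
clean.write.format("jdbc").options(
    url="jdbc:postgresql://example-host:5432/analytics",
    dbtable="public.events",
    user="etl_user",
    password="********",
    driver="org.postgresql.Driver",
).mode("append").save()

spark.stop()
```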
About the job:
- You will work with data scientists to architect, code and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe and live Python, sklearn and pandas. It's good to have experience in these but not a necessity - as long as you're super comfortable in a language of your choice.
- You will develop tools and products that give analysts ready access to the data

About you:
- Strong CS fundamentals
- You have strong experience working with production environments
- You write code that is clean, readable and tested
- Instead of doing something a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (PostgreSQL, S3, Hadoop, Hive, Presto, Spark, etc.)
- It will be great if you have something to share - a Kaggle or a GitHub profile
- You are an expert in one or more programming languages (Python preferred). It is also good to have experience with Python-based application development and data science libraries.
- Ideally, you have 2+ years of experience in tech and/or data.
- Degree in CS/Maths from a Tier-1 institute.
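To give a flavour of the pandas/sklearn work mentioned above, here is a minimal hedged sketch. The dataset and feature names are synthetic, not from any real project.

```python
# Tiny illustrative model-training sketch with pandas and scikit-learn;
# the data below is synthetic and exists only to make the example runnable.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "feature_a": [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.5],
    "feature_b": [1, 0, 1, 1, 0, 0, 1, 0],
    "label":     [0, 0, 0, 1, 1, 0, 1, 1],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["feature_a", "feature_b"]], df["label"], test_size=0.25, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```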
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:
- Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience in architecting data ingestion, storage and consumption models
- Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools and techniques
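As a hedged illustration of the ingestion work described, a minimal Spark Structured Streaming read from Kafka follows. The broker address, topic and output paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Minimal Kafka -> Spark Structured Streaming ingestion sketch; broker address,
# topic and paths are placeholders (requires the spark-sql-kafka connector).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingestion-example").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "s3://example-bucket/landing/sensor-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/sensor-events/")
    .start()
)
query.awaitTermination()
```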
Looking for a technically sound, excellent trainer in Big Data technologies. This is an opportunity to build your reputation in the industry and gain visibility. Host regular sessions on Big Data-related technologies and get paid to learn.
Our company is working on some really interesting projects in the Big Data domain across various fields (utility, retail, finance). We are working with some big corporates and MNCs around the world. Working here as a Big Data Engineer, you will deal with big data in structured and unstructured form, as well as streaming data from Industrial IoT infrastructure. You will work on cutting-edge technologies and explore many others, while also contributing back to the open-source community. You will get to know and work on an end-to-end processing pipeline covering all types of work: storage, processing, machine learning, visualization, etc.
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.

Onboarding process:
• Send your updated CV to my email id, along with copies of relevant certificates.
• Sample e-learning access will be shared, with a 15-day trial, after your registration on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation - duration 15 to 20 minutes.
• Commercial discussion.
• We will register you for our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment process:
• Once a workshop, or the last day of training for the batch, is completed, you share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy; the 15 days are counted from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.