Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
- Work with Big Data tools and frameworks to provide requested capabilities
- Identify development needs in order to improve and streamline operations
- Develop and manage BI solutions
- Implement ETL processes and data warehousing
- Monitor performance and manage infrastructure

Skills
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
- Good knowledge of data querying tools such as SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB or MongoDB is an advantage
- Excellent written and verbal communication skills
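The ETL work this posting asks for can be illustrated in miniature. The following is only a hedged sketch of the extract/transform/load pattern (the `sales` table, field names, and sample rows are all hypothetical), not any employer's actual pipeline:

```python
import sqlite3

def extract(rows):
    """Extract: yield raw records from a hypothetical source."""
    yield from rows

def transform(records):
    """Transform: normalize names and drop records with negative amounts."""
    for name, amount in records:
        if amount >= 0:
            yield name.strip().lower(), amount

def load(conn, records):
    """Load: insert cleaned records into a warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", records)

raw = [(" Alice ", 10.0), ("BOB", -5.0), ("Carol", 7.5)]
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # (2, 17.5)
```

Real pipelines would swap the in-memory SQLite target for a warehouse such as Redshift or Hive, but the three-stage shape is the same.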
Skill Set
SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts is a plus.

JD
- Strong analytical skills, with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports and visualisations
- Work with management to prioritise business KPIs and information needs; locate and define new process-improvement opportunities
- Technical expertise with data models, database design and development, data mining and segmentation techniques
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data is a plus
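"Interpreting trends in complex data sets" often starts with simple smoothing. As a hedged, minimal illustration (the daily figures are invented), a trailing moving average is one of the most basic trend tools:

```python
def moving_average(values, window):
    """Trailing moving average: smooths noise so a trend is easier to see."""
    out = []
    for i in range(window - 1, len(values)):
        out.append(sum(values[i - window + 1 : i + 1]) / window)
    return out

daily = [10, 12, 11, 15, 18, 17, 21]
smoothed = [round(x, 2) for x in moving_average(daily, 3)]
print(smoothed)  # [11.0, 12.67, 14.67, 16.67, 18.67]
```

In practice this would be `pandas.Series.rolling(window).mean()` over a real metric, but the arithmetic is exactly this.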
Vervali is seeking a Data Engineer for Thane, Mumbai.
Salary: 20 to 30% hike
Notice Period: Immediate or 10 to 20 days
Qualification: BE/B Tech in Computer Science/Information Technology
Relevant experience: 4-5 years of experience in programming and data transformation tools for ETL

Key Responsibilities:
- Data warehouse development and ETL design
- Program using Python and R technologies
- Build and maintain SQL procedures and ETL processes
- Design and implement a technical vision for client projects

Must have:
- Qualified individuals possess the attributes of being smart, curious, committed to the vision, passionate, fun/pleasant, an achiever, and having a sense of urgency
- 5+ years of data-focused software development and design experience
- 3+ years of experience designing and developing ETL solutions using Informatica PowerCenter 9.x, Matillion, or Talend
- Experience designing and developing database solutions using SQL Server and/or cloud database solutions (Hadoop, Redshift, Snowflake, BigQuery, MySQL, etc.)
- 3+ years of experience with Python and Bash shell scripting; experience with cloud technologies (AWS, GCP, Azure) is a plus

Thanks & Regards,
Darshit Mandavia
Job Responsibilities:
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
- Develop or modify procedures to solve complex database design problems, including performance, scalability, security and integration issues, for various clients (on-site and off-site)
- Design, develop, test, and support the data warehouse solution
- Adopt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration
- Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing
- Evaluate all existing hardware and software against required standards, and configure hardware clusters to match the scale of the data
- Integrate data using enterprise development tool-sets (e.g. ETL, MDM, CDC, Data Masking, Quality)
- Maintain and develop all logical and physical data models for the enterprise data warehouse (EDW)
- Contribute to the long-term vision of the EDW by delivering Agile solutions
- Interact with end users/clients and translate business language into technical requirements
- Act independently to expose and resolve problems
- Participate in data warehouse health monitoring and performance optimization, as well as quality documentation

Job Requirements:
- 2+ years of experience in software development and data warehouse development for enterprise analytics
- 2+ years of working with Python, with substantial Redshift experience as a must and exposure to other warehousing tools
- Deep expertise in data warehousing and dimensional modeling, and the ability to bring best practices to data management, ETL, API integrations, and data governance
- Experience working with data retrieval and manipulation tools for various data sources, both relational (MySQL, PostgreSQL, Oracle) and cloud-based storage
- Experience with analytic and reporting tools (Tableau, Power BI, SSRS, SSAS)
- Experience with the AWS cloud stack (S3, Glue, Redshift, Lake Formation)
- Experience with various DevOps practices, helping the client deploy and scale systems as required
- Strong verbal and written communication skills with other developers and business clients
- Knowledge of the logistics and/or transportation domain is a plus
- Ability to handle/ingest very large data sets (both real-time and batched data) efficiently
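The "dimensional modeling" this posting calls for usually means a star schema: a central fact table of measures joined to dimension tables of descriptive attributes. A hedged, toy-sized sketch using SQLite (all table names, columns, and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Dimension table: descriptive attributes used for slicing and grouping.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: numeric measures keyed to the dimensions.
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, qty INTEGER, "
            "FOREIGN KEY (product_id) REFERENCES dim_product(product_id))")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 3), (1, 2), (2, 7)])
# The typical warehouse query: aggregate facts grouped by a dimension attribute.
rows = cur.execute("""
    SELECT d.category, SUM(f.qty)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 5), ('games', 7)]
```

A production warehouse (Redshift, Snowflake, BigQuery) runs the same join-and-aggregate pattern, just over far larger fact tables.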
Who we are
Searce is a Cloud, Automation & Analytics led business transformation company focused on helping futurify businesses. We help our clients become successful by helping them reimagine 'what's next' and then enabling them to realize that 'now'. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What we believe
- Best practices are overrated: implementing best practices can only make one 'average'.
- Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
- Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: We are comfortable with uncertainty, and we accept change well, as that's what life is about.
- Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have time or adequate space for them.
- Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or die. We love to challenge the status quo.
- Experimental: We encourage curiosity and making mistakes.
- Responsible: Driven. Self-motivated. Self-governing teams. We own it.

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Introduction
When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let's talk.

We are quite keen to meet you if:
- You eat, dream, sleep and play with Cloud Data Store and engineer your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people
- You like experimenting, taking risks and thinking big

3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build and recruit, and you will feel that this is not really 'work'.
- This is NOT a position for people who like to spend more time talking than doing.

3 things this position IS about:
- Attention to detail matters.
- Roles, titles and ego do not matter; getting things done matters. Getting things done quicker and better matters the most.
- Are you passionate about learning new domains and architecting solutions that could save a company millions of dollars?

Roles and Responsibilities
- Drive and define database design and development of real-time complex products
- Strive for excellence in customer experience, technology, methodology, and execution
- Define and own end-to-end architecture from the definition phase to the go-live phase
- Define reusable components/frameworks, common schemas, standards and tools to be used, and help bootstrap the engineering team
- Performance tuning of application and database, and code optimizations
- Define database strategy, database design and development standards and SDLC, database customization and extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies
- Architect and develop database schemas, indexing strategies, views, and stored procedures for cloud applications
- Assist in defining the scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development
- Contribute to innovation and continuous enhancement of the platform
- Define and implement a strategy for data services to be used by cloud and web-based applications
- Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements
- Design robust cloud management implementations, including orchestration and catalog capabilities
- Architect and design distributed data processing solutions using big data technologies (an added advantage)
- Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers

Desired Skills
- Experience with data warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for DW platforms
- Hands-on experience in the Big Data space (the Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.); knowledge of NoSQL stores is a plus
- Knowledge of other transactional database management systems/open database systems and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus
- Good knowledge of data management principles such as data architecture, data governance, very large database (VLDB) design, distributed database design, data replication, and high availability
- Must have experience in designing large-scale, highly available, fault-tolerant OLTP data management systems
- Solid knowledge of at least one industry-leading RDBMS, such as Oracle, SQL Server, DB2 or MySQL
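The "indexing strategies" mentioned above boil down to one rule: index the columns your queries filter on, so the engine can seek instead of scanning the whole table. A hedged sketch using SQLite (the `orders` table and its columns are hypothetical; other RDBMSs expose the same idea through their own query-plan tools):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(1000)])
# Index the column the query filters on, so the lookup avoids a full scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"
).fetchall()
# The plan's detail column should report an index search rather than a scan,
# e.g. 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'.
print(plan[0][-1])
```

The exact plan wording varies by SQLite version, but the SEARCH-vs-SCAN distinction is the point: without the index, the same query reports a full table scan.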
- Expertise in providing data architecture solutions and recommendations that are technology-neutral
- Experience in architecture consulting engagements is a plus
- Deep understanding of technical and functional designs for databases, data warehousing, reporting, and data mining

Education & Experience
- Bachelor's in Engineering or Computer Science (preferably from a premier school); an advanced degree in Engineering, Mathematics, Computer Science or Information Technology
- A highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees! More so if you have been a techie from 12.
- 2-5 years of experience in database design and development
- 0+ years of experience with AWS, Google Cloud Platform, or Hadoop
- Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity
About UpGrad:
UpGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. UpGrad currently offers programs in Data Science, Big Data, Product Management, Digital Marketing, Entrepreneurship and Management. UpGrad was rated one of the top 10 most innovative companies in India for 2017 (https://www.fastcompany.com/most-innovative-companies/2017/sectors/india). UpGrad is co-founded by three IIT-Delhi and Parthenon alumni; the fourth co-founder is serial entrepreneur Ronnie Screwvala. UpGrad has committed capital of 100 Cr and, in its first year of operations, built the largest revenue-generating online program in India (PG Diploma in Data Science) and the online program with the largest enrolment in India (Start-up India learning program).

Position: Senior Data Engineer
Position Type: Full Time
Location: Mumbai

We are looking for an experienced Data Engineer for product and business analytics who will design and build mission-critical data pipelines in a SQL environment.
As a Senior Data Engineer, you will:
- Engineer data pipelines (batch and real-time) that aid in the creation of data-driven products for our platform
- Design, develop and maintain a robust and scalable data warehouse
- Work closely alongside product managers and data scientists to bring the various datasets together and cater to our business intelligence and analytics use cases
- Design and develop solutions using data science techniques ranging from statistics and algorithms to machine learning
- Perform hands-on DevOps work to keep the data platform secure and reliable

Basic Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related engineering discipline
- 4+ years' experience with ETL, data mining, data modeling, and working with large-scale datasets
- 1+ years' experience with an object-oriented programming language such as Python, C++ or Java
- Extremely proficient in writing performant SQL against large data volumes
- Experience with map-reduce concepts
- Experience building automated analytical systems utilizing large data sets
- Familiarity with AWS technologies preferred
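The "map-reduce concepts" requirement above refers to the map/shuffle/reduce pattern behind Hadoop and Spark. A hedged, single-process toy version in pure Python (the two sample documents are invented), purely to show the three phases:

```python
from collections import defaultdict

def map_phase(doc):
    """Map: emit (word, 1) pairs for each word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big pipelines", "data pipelines"]
pairs = [p for doc in docs for p in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))  # {'big': 2, 'data': 2, 'pipelines': 2}
```

In a real cluster the map and reduce phases run in parallel across machines and the shuffle happens over the network; the per-key logic is identical.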
DESCRIPTION:
We're looking for an experienced Data Engineer with strong cloud technology experience to join our team and help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need a strong software development background and a love of working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark and other big data processing frameworks and technologies, as well as advanced knowledge of RDBMS and data warehousing solutions.

REQUIREMENTS:
- Strong background working on large-scale data warehousing and data processing solutions
- Strong Python and Spark programming experience
- Strong experience in building big data pipelines
- Very strong SQL skills are an absolute must
- Good knowledge of OO, functional and procedural programming paradigms
- Strong understanding of various design patterns
- Strong understanding of data structures and algorithms
- Strong experience with Linux operating systems
- At least 2+ years of experience working as a software developer in a data-driven environment
- Experience working in an agile environment
- Lots of passion, motivation and drive to succeed!

Highly desirable:
- Understanding of agile principles, specifically Scrum
- Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
- Docker, Puppet, Ansible, etc.
- Understanding of the digital marketing and digital advertising space would be advantageous

BENEFITS:
Datalicious is a global data technology company that helps marketers improve customer journeys through the implementation of smart data-driven marketing strategies.
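The "big data pipelines" these requirements describe are usually chains of lazy stages: a source, one or more transforms, and a sink. A hedged pure-Python sketch of that shape using generators (the event format and stage names are hypothetical; Spark or Kinesis would replace the in-memory stages at scale):

```python
def read_events(lines):
    """Source stage: parse raw 'user,amount' lines lazily."""
    for line in lines:
        user, amount = line.split(",")
        yield user, float(amount)

def filter_valid(events):
    """Filter stage: drop non-positive amounts."""
    return ((u, a) for u, a in events if a > 0)

def aggregate(events):
    """Sink stage: total amount per user."""
    totals = {}
    for user, amount in events:
        totals[user] = totals.get(user, 0.0) + amount
    return totals

raw = ["a,5.0", "b,-1.0", "a,2.5", "b,4.0"]
print(aggregate(filter_valid(read_events(raw))))  # {'a': 7.5, 'b': 4.0}
```

Because each stage is a generator, records stream through one at a time; that laziness is what lets the same structure scale from a list in memory to a distributed framework.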
Our team of marketing data specialists offers a wide range of skills suitable for any challenge, covering everything from web analytics to data engineering, data science and software development.
Experience: Join us at any level and we promise you'll feel up-levelled in no time, thanks to the fast-paced, transparent and aggressive growth of Datalicious.
Exposure: Work with ONLY the best clients in the Australian and SEA markets; every problem you solve directly impacts millions of real people at large scale across industries.
Work Culture: Voted one of the Top 10 Tech Companies in Australia. Never a boring day at work, and we walk the talk. The CEO organises nerf-gun bouts in the middle of a hectic day.
Money: We'd love to have a long-term relationship, because long-term benefits are exponential. We encourage people to get technical certifications via online courses or digital schools.
So if you are looking for the chance to work for an innovative, fast-growing business that will give you exposure to a diverse range of the world's best clients, products and industry-leading technologies, then Datalicious is the company for you!
The Last Mile Analytics & Quality Team in Hyderabad is looking for a Transportation Quality Specialist who will act as first-level support for address, geocode and static route management in Last Mile across multiple Transportation services, along with other operational issues and activities related to the Transportation process and its optimization. Your solutions will impact our customers directly! This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success. High-impact production issues often require coordination between multiple Development, Operations and IT Support groups, so you get to experience a breadth of impact with various groups.

Primary responsibilities include troubleshooting, diagnosing and fixing static route issues; developing monitoring solutions; performing software maintenance and configuration; implementing fixes for internally developed code; performing minor SQL queries; and updating, tracking and resolving technical challenges. Responsibilities also include working alongside development on Amazon Corporate and Divisional Software projects, updating/enhancing our current tools, automating support processes and documenting our systems.

The ideal candidate must be detail-oriented; have superior verbal and written communication skills and strong organizational skills; be able to juggle multiple tasks at once; be able to work independently; and maintain professionalism under pressure. You must be able to identify problems before they happen and implement solutions that detect and prevent outages. You must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done.
Basic qualifications
- Bachelor's degree in Computer Science or Engineering
- Good communication skills, both verbal and written
- Demonstrated ability to work in a team
- Proficiency in MS Office, SQL and Excel

Preferred qualifications
- Experience working with relational databases
- Experience with Linux
- Debugging and troubleshooting skills, with an enthusiastic attitude toward supporting and resolving customer problems