What is Contentstack?
Contentstack combines the best Content Management System (CMS) and Digital Experience Platform (DXP) technology. It enables enterprises to manage content across all digital channels and create inimitable digital experiences. The Contentstack platform was designed from the ground up for large-scale, complex, and mission-critical deployments. Recently recognized as the Gartner Peer Insights Customers' Choice for WCM, Contentstack is the preferred API-first, headless CMS for enterprises across the globe.

What Are We Looking For?
Contentstack is looking for a Data Engineer.

Roles and responsibilities:
- Design and scale ETL pipelines, and ensure data sanity
- Collaborate with multiple groups and improve operational efficiency
- Develop, construct, test, and maintain architectures
- Align architecture with business requirements
- Identify ways to improve data reliability, efficiency, and quality
- Optimize database systems for performance and reliability
- Implement model workflows to prepare/analyze/learn/predict and supply the outcomes through API contract(s)
- Establish programming patterns, document components, and provide infrastructure for analysis and execution
- Set up practices for data reporting and continuous monitoring
- Pursue excellence, stay open to new ideas, and contribute to communities
- Industrialize data science models and embed intelligence in product and business applications
- Find hidden patterns in data
- Prepare data for predictive and prescriptive modeling
- Deploy sophisticated analytics programs, machine learning, and statistical methods

Mandatory Skills
- 3+ years of relevant work experience as a Data Engineer
- Working experience with HDFS, Bigtable, MapReduce, Spark, data warehouses, ETL, etc.
- Advanced proficiency in Java, Scala, SQL, NoSQL
- Strong knowledge of Shell/Perl/R/Python/Ruby
- Proficiency in statistical procedures, experiments, and machine learning techniques
- Exceptional problem-solving abilities

Job type – Full-time employment
Job location – Mumbai / Pune / Bangalore / Remote
Work schedule – Monday to Friday, 10 am to 7 pm
Minimum qualification – Graduate
Years of experience – 3+ years
No. of positions – 2
Travel opportunities – On a need basis within/outside India; candidate should have a valid passport

What Really Gets Us Excited About You?
- Experience working with product-based start-up companies
- Knowledge of working with SaaS products

What Do We Offer?
Interesting Work | We hire curious trendspotters and brave trendsetters. This is NOT your boring, routine, cushy, rest-and-vest corporate job. This is the “challenge yourself” role where you learn something new every day, never stop growing, and have fun while you’re doing it.
Tribe Vibe | We are more than colleagues; we are a tribe. We have a strict “no a**hole policy” and enforce it diligently. This means we spend time together - with spontaneous office happy hours, organized outings, and community volunteer opportunities. We are a diverse and distributed team, but we like to stay connected.
Bragging Rights | We are dreamers and dream makers, hustlers, and honey badgers. Our efforts pay off, and we work with the most prestigious brands, from big-name retailers to airlines to professional sports teams. Your contribution will make an impact with many of the most recognizable names in almost every industry, including Chase, The Miami HEAT, Cisco, Shell, Express, Riot Games, Icelandair, Morningstar, and many more!
A Seat at the Table | One Team One Dream is one of our values, and it shows. We don’t believe in artificial hierarchies. If you’re part of the tribe, you get a seat at the table. This includes unfiltered access to our C-Suite and regular updates about the business and its performance. Which, btw, is through the roof, so it’s a great time to be joining…
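Returning to the technical side of the Data Engineer role: the “designing and scaling ETL pipelines, and ensuring data sanity” responsibility can be illustrated with a minimal record-validation step. This is a sketch only; the field names and sanity rules below are assumptions for illustration, not an actual Contentstack schema.

```python
from datetime import datetime

# Hypothetical sanity rules for incoming event records.
REQUIRED_FIELDS = ("user_id", "event", "timestamp")

def is_sane(record):
    """Return True if the record passes basic sanity checks."""
    # Every required field must be present and non-empty.
    if any(not record.get(f) for f in REQUIRED_FIELDS):
        return False
    # Timestamps must parse as ISO 8601 and must not lie in the future.
    try:
        ts = datetime.fromisoformat(record["timestamp"])
    except ValueError:
        return False
    return ts <= datetime.now()

records = [
    {"user_id": "u1", "event": "click", "timestamp": "2023-01-05T10:00:00"},
    {"user_id": "",   "event": "click", "timestamp": "2023-01-05T10:00:00"},
    {"user_id": "u2", "event": "view",  "timestamp": "not-a-date"},
]
clean = [r for r in records if is_sane(r)]
print(len(clean))  # only the first record passes
```

In a production pipeline the same idea scales by pushing these checks into the ingestion layer (e.g., as a Spark filter), so bad rows are quarantined before they reach the warehouse.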
Responsibilities
· Ensure timely and top-quality product delivery
· Ensure that the end product is fully and correctly defined and documented
· Ensure implementation and continuous improvement of formal processes to support product development activities
· Drive the architecture/design decisions needed to achieve cost-effective and high-performance results
· Conduct feasibility analysis; produce functional and design specifications of proposed new features
· Provide helpful and productive code reviews for peers and junior members of the team
· Troubleshoot complex issues discovered in-house as well as in customer environments
Qualifications
· Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
· Expertise in Java, object-oriented programming, and design patterns
· Experience coding and implementing scalable solutions in a large-scale distributed environment
· Working experience in a Linux/UNIX environment is good to have
· Experience with relational databases and database concepts, preferably MySQL
· Experience with SQL and Java optimization for real-time systems
· Familiarity with version control systems such as Git and build tools like Maven
· Excellent interpersonal, written, and verbal communication skills
· BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent
Technology Skills:
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud
- Designing and implementing data engineering, ingestion, and transformation functions

Good to Have:
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions like Attunity/StreamSets, Informatica
- Experience with pre-sales activities (responding to RFPs, executing quick POCs)
- Capacity planning and performance tuning on the Azure stack and Spark
Role Summary/Purpose:
We are looking for Developers/Senior Developers to be part of building an advanced analytics platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing, and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment
- Overall minimum of 4 to 8 years of software development experience, including 2 years of domain knowledge in data warehousing
- Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge of SQL and Linux shell scripting
- Bachelor's/Master's/Engineering degree from a well-reputed university
- Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working on Agile deliveries and Scrum teams

Responsibilities:
- Work as a senior developer/individual contributor depending on the situation
- Be part of Scrum discussions and gather requirements
- Adhere to the Scrum timeline and deliver accordingly
- Participate in a team environment for design, development, and implementation
- Take on L3 activities on a need basis
- Prepare unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
- Make quality delivery and automation a top priority
- Coordinate change and deployment in time
- Create healthy harmony within the team
- Own interaction points with members of the core team (e.g., BA team, testing and business teams) and any other relevant stakeholders
The Person:
- Articulate, high energy, with a passion to learn and a high sense of ownership
- Ability to work in a fast-paced, deadline-driven environment
- Loves technology; highly skilled at data interpretation; a problem solver
- Must be able to see how technology and people together can create stickiness for long-term engagements
- Skills to work in a challenging, complex project environment
- Naturally curious, with a passion for understanding consumer behavior
- A high level of motivation, passion, and sense of ownership
- Excellent communication skills, needed to manage an incredibly diverse slate of work and team personalities
- Able to manage multiple projects in a deadline-driven, fast-paced environment
- Ability to work in ambiguity and manage chaos

Requirement:
- Expertise in Python, PySpark, MySQL, and AWS
- 2+ years of recent experience in Data Engineering
- Tech. or equivalent degree in CS/CE/IT/ECE/EEE

Responsibility:
- Build data pipelines to ingest structured and unstructured data; candidates should be comfortable implementing an end-to-end ETL pipeline
- Must be comfortable with well-known JDBC connectors, like MySQL, PostgreSQL, etc.
- Must be comfortable with both Spark and Python scripting
- Must have extensive experience with AWS Glue, crawlers, and catalog databases; candidates should know how triggers work in AWS Glue
- Must be comfortable with SQL and HQL (Hive Query Language)
- Experience with AWS Lambda and API Gateway is a plus
- Experience with CDI/CDP platforms like Segment, Mixpanel, etc., is a plus

Good to have:
- Data Wrangler, Glue dynamic DataFrames, and PySpark workloads on EMR clusters and AWS Step Functions
- Serialization/deserialization techniques
- Experience with other AWS services such as DMS, Data Pipeline, and SCT
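The end-to-end ETL pipeline this role asks for can be sketched in miniature. This is an illustration only: it uses the stdlib sqlite3 module as a stand-in for a production JDBC source (MySQL/PostgreSQL), and the table and column names are hypothetical; a real Glue or Spark job would follow the same extract/transform/load shape at scale.

```python
import sqlite3

def extract(conn):
    # Pull raw rows from the source (a JDBC database in production;
    # sqlite3 here as a stand-in).
    return conn.execute("SELECT id, email, amount FROM raw_orders").fetchall()

def transform(rows):
    # Basic cleansing: drop rows with a missing email or a non-positive
    # amount, and normalize email casing.
    clean = []
    for row_id, email, amount in rows:
        if not email or amount is None or amount <= 0:
            continue
        clean.append((row_id, email.strip().lower(), round(float(amount), 2)))
    return clean

def load(conn, rows):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, email TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

# Demo with an in-memory source and target.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_orders (id INTEGER, email TEXT, amount REAL)")
src.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, " Ana@Example.com ", 10.5), (2, None, 3.0), (3, "bo@example.com", -1)])
tgt = sqlite3.connect(":memory:")
load(tgt, transform(extract(src)))
loaded = tgt.execute("SELECT * FROM orders").fetchall()
print(loaded)  # only the valid row survives the transform step
```

In AWS terms, the extract step corresponds to a Glue crawler plus a catalog-backed source, the transform to a PySpark/DynamicFrame script, and the load to a sink such as S3 or Redshift.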
Job Title/Designation: Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune

At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.

Must Have:
- 4-10 years of experience in software development
- At least 2 years of relevant work experience on large-scale data applications
- Strong coding experience in Java is mandatory
- Good aptitude, strong problem-solving abilities and analytical skills, and the ability to take ownership as appropriate
- Should be able to code, debug, performance-tune, and deploy apps to production
- Good working experience with:
  - the Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
  - Kafka
  - J2EE frameworks (Spring/Hibernate/REST)
  - Spark Streaming or any other streaming technology
- Ability to work sprint stories to completion, with unit test case coverage
- Experience working in Agile methodology
- Excellent communication and coordination skills
- Knowledgeable about (hands-on preferred) UNIX environments and different continuous integration tools
- Must be able to integrate quickly into the team and work independently towards team goals

Role & Responsibilities:
- Take complete responsibility for the execution of sprint stories
- Be accountable for delivering tasks within the defined timelines and with good quality
- Follow the processes for project execution and delivery
- Follow Agile methodology
- Work closely with the team lead and contribute to the smooth delivery of the project
- Understand/define the architecture and discuss its pros and cons with the team
- Take part in brainstorming sessions and suggest improvements to the architecture/design
- Work with other team leads to get the architecture/design reviewed
- Work with the clients and counterparts (in the US) on the project
- Keep all stakeholders updated about project/task status, risks, and issues, if any

Education: BE/B.Tech from a reputed institute
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune
Description
- Deep experience with, and understanding of, Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce
- Strong understanding of development languages, including Java, Python, Scala, and shell scripting
- Expertise in Apache Spark 2.x framework principles and usage
- Should be proficient in developing Spark batch and streaming jobs in Python, Scala, or Java
- Should have proven experience in performance tuning of Spark applications, from both an application-code and a configuration perspective
- Should be proficient in Kafka and its integration with Spark
- Should be proficient in Spark SQL and data warehousing techniques using Hive
- Should be very proficient in Unix shell scripting and in operating on Linux
- Should have knowledge of any cloud-based infrastructure
- Good experience in tuning Spark applications and delivering performance improvements
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, the phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation)
- Capable of working within a team or as an individual
- Experience creating technical documentation
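The configuration side of Spark performance tuning mentioned above often comes down to sizing executors and shuffle parallelism. A hedged sketch of what such a spark-submit invocation might look like follows; the numeric values are illustrative assumptions only, not recommended defaults, since the right figures depend on cluster size and data volume, and `my_job.py` is a hypothetical job name.

```shell
# Illustrative spark-submit flags for tuning a Spark batch job
# (values are examples only; size them to your cluster and data).
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=200 \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  my_job.py
```

The application-code side of tuning (partitioning, caching, avoiding wide shuffles) complements these settings; neither alone is usually sufficient.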
Description
Requirements:
- Overall experience of 10 years, with a minimum of 6 years of data analysis experience
- MBA in Finance or a similar background
- Ability to lead projects and work independently
- Must be able to write complex SQL, including cohort analysis, comparative analysis, etc.
- Experience working directly with business users to build reports and dashboards and to answer business questions with data
- Experience doing analysis with Python and Spark is a plus
- Experience with MicroStrategy or Tableau is a plus
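The “complex SQL, cohort analysis” requirement above can be made concrete with a small, self-contained example. The data, table, and column names are made up for illustration; it runs here against stdlib sqlite3, but the same CTE-based query pattern applies to any SQL warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id TEXT, order_month TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("u1", "2023-01"), ("u1", "2023-02"),  # u1: Jan cohort, returns in Feb
    ("u2", "2023-01"),                     # u2: Jan cohort, no return
    ("u3", "2023-02"),                     # u3: Feb cohort
])

# Classic cohort query: assign each user to the month of their first
# order, then count distinct active users per (cohort, activity month).
rows = conn.execute("""
    WITH first_orders AS (
        SELECT user_id, MIN(order_month) AS cohort
        FROM orders
        GROUP BY user_id
    )
    SELECT f.cohort, o.order_month, COUNT(DISTINCT o.user_id) AS active_users
    FROM orders o
    JOIN first_orders f ON o.user_id = f.user_id
    GROUP BY f.cohort, o.order_month
    ORDER BY f.cohort, o.order_month
""").fetchall()
print(rows)  # [('2023-01', '2023-01', 2), ('2023-01', '2023-02', 1), ('2023-02', '2023-02', 1)]
```

Reading the output as a retention grid: the January cohort starts with 2 users and retains 1 in February, which is exactly the shape analysts pivot into cohort-retention dashboards.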
We are looking to hire passionate Java techies who will be comfortable learning and working with Java and any open-source frameworks and technologies. She/he should be 100% hands-on on technology skills and interested in solving complex analytics use cases. We are working on a complete-stack platform which has already been adopted by some very large enterprises across the world. Candidates with prior experience of working in a typical R&D environment and/or at product-based companies with a dynamic work environment will have an additional edge. We currently work with some of the latest technologies like Cassandra, Hadoop, Apache Solr, Spark, and Lucene, as well as core machine learning and AI technologies. Even though prior knowledge of these skills is not mandatory for selection, you would be expected to learn new skills on the job.
We at InfoVision Labs are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges, the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers, focused on mobile technology, responsive web solutions, and cloud-based solutions.

Job Responsibilities:
◾ Minimum 3 years of experience with Big Data skills required
◾ Complete life cycle experience with Big Data is highly preferred
◾ Skills – Hadoop, Spark, R, Hive, Pig, HBase, and Scala
◾ Excellent communication skills
◾ Ability to work independently with no supervision
Ixsight Technologies is an innovative IT company with strong intellectual property. Ixsight is focused on creating customer data value through its solutions for identity management, locational analytics, address science, and customer engagement. Ixsight is also adapting its solutions to Big Data and the cloud, and we are in the process of creating new solutions across platforms. Ixsight has served over 80 clients in India, across various end-user applications in the traditional BFSI and telecom sectors; in the recent past we have been catering to new-generation verticals such as hospitality and e-commerce. Ixsight has been featured in Gartner's India Technology Hype Cycle and has been recognised by both clients and peers for pioneering, excellent solutions. If you wish to play a direct part in creating new products, building IP, and being part of product creation, Ixsight is the place.