Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark
Languages: Java, Python, Shell Scripting
GCP: BigTable, DataProc, BigQuery, GCS, Pubsub OR AWS: Athena, Glue, EMR, S3, Redshift
Databases/Streaming: MongoDB, MySQL, Kafka
Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type: Full Time
Director - Applied AI

Who are we? Searce is a niche Cloud Consulting business with a futuristic tech DNA. We use new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What do we believe?
- Best practices are overrated: implementing best practices can only make one average.
- Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
- Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How do we work? It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER:
- Humble: Happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: We are comfortable with uncertainty, and we accept change well, as that's what life is about.
- Positive: We are super positive about work and life in general. We love to forgive and forget. We don't hold grudges; we don't have the time or space for them.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or die. We love to challenge the status quo.
- Experimental: We encourage curiosity and making mistakes.
- Responsible: Driven. Self-motivated. Self-governing teams. We own it.

So, what are we hunting for?
- To devise strategy through the delivery of sustainable intelligent solutions, strategic customer engagements, and research and development
- To enable and lead our data and analytics team and develop machine learning and AI paths across strategic programs, solution implementation, and customer relationships
- To manage existing customers and realise new opportunities and capabilities for growth
- To collaborate with different stakeholders to deliver automated, highly available, and secure solutions
- To develop talent and skills to create a high-performance team that delivers superior products
- To communicate effectively across the organisation to ensure that the team is completely aligned with business objectives
- To build strong interpersonal relationships with peers and other key stakeholders that will contribute to your team's success

Your bucket of undertakings:
- Develop an AI roadmap aligned to client needs and vision
- Develop a go-to-market strategy of AI solutions for customers
- Build a diverse cross-functional team to identify and prioritise key areas of the business across AI, NLP, and other cognitive solutions that will drive significant business benefit
- Lead AI R&D initiatives, including prototypes and minimum viable products
- Work closely with multiple teams on projects such as visual quality inspection, MLOps, conversational banking, demand forecasting, and anomaly detection
- Build reusable and scalable solutions for use across the customer base
- Create AI white papers and enable strategic partnerships with industry leaders
- Align, mentor, and manage teams around strategic initiatives
- Prototype and demonstrate AI-related products and solutions for customers
- Establish processes, operations, measurement, and controls for end-to-end life-cycle management of the digital workforce (intelligent systems)
- Lead AI tech challenges and proposals with team members
- Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
- Identify new business opportunities and prioritise pursuits for AI

Education & Experience:
- Advanced or basic degree (PhD with a few years' experience, or MS/BS with many years' experience) in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research, or related, with a focus on applied and foundational Machine Learning, AI, NLP, and/or data-driven statistical analysis and modelling
- 10+ years of experience, primarily in applying AI/ML/NLP/deep learning/data-driven statistical analysis and modelling solutions across multiple domains; experience in financial engineering and financial processes is a plus
- Strong, proven programming skills with machine learning, deep learning, and big data frameworks, including TensorFlow, Caffe, Spark, and Hadoop
- Experience writing complex programs and implementing custom algorithms in these and other environments
- Experience beyond using open-source tools as-is: writing custom code on top of, or in addition to, existing open-source frameworks
- Proven capability in demonstrating successful advanced technology solutions (prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
- Experience in data management, data analytics middleware, platforms and infrastructure, and cloud and fog computing is a plus
- Excellent communication skills (oral and written) to explain complex algorithms and solutions to stakeholders across multiple disciplines, and the ability to work in a diverse team
- Good understanding of data structures and algorithms
- Working knowledge of Scala/Python/Java
- Good understanding of the big data domain (Hadoop/MapReduce) and ETL architecture
- Hands-on experience with Hive/Spark
- Experience: 3+ years
- Notice period: immediate to 15 days preferred, 30 days maximum
Greetings! We have an urgent requirement for the post of Big Data Architect at a reputed MNC.
Location: Pune/Nagpur/Goa/Hyderabad/Bangalore
Job Requirements:
- 9 years and above of total experience, preferably in the big data space
- Creating Spark applications using Scala to process data
- Experience in scheduling and troubleshooting/debugging Spark jobs in steps
- Experience in Spark job performance tuning and optimisation
- Should have experience processing data using Kafka/Python
- Should have experience and understanding in configuring Kafka topics to optimise performance
- Should be proficient in writing SQL queries to process data in a data warehouse
- Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks
- Experience with AWS services such as EMR
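The Kafka topic-configuration point above comes down to partitioning: a record's key determines which partition receives it, and the partition count bounds consumer parallelism, which is why topic configuration matters for throughput. A minimal sketch of key-to-partition routing, using a deterministic md5-based hash as a stand-in for Kafka's actual murmur2 partitioner (key values and partition count are illustrative):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Route a record key to a partition, as Kafka's default
    partitioner does for keyed records. (Kafka itself uses murmur2;
    md5 is a stand-in so this sketch is deterministic everywhere.)"""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All records with the same key land in the same partition, which is
# what preserves per-key ordering; more partitions allow more consumers.
keys = [b"user-1", b"user-2", b"user-1"]
assignments = [partition_for(k, num_partitions=6) for k in keys]
assert assignments[0] == assignments[2]  # same key, same partition
```

This is also why repartitioning an existing topic breaks per-key ordering: the modulo changes, so the same key can start landing elsewhere.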
Work from home is applicable. The candidate should have at least 4 years' experience and be well versed in full-stack development. Location: Bangalore and Pune. Relevant skills: Java, Angular, Spring Boot, React.
Job Title/Designation: Technical Manager - Big Data, Data Warehousing, BI
Job Description:
Location: Pune
Experience: 8+ years
Responsibilities:
- Work closely with customers to understand requirements and to discuss and define various use cases
- Liaise with key stakeholders to define the big data solutions roadmap and prioritise the deliverables
- Own end-to-end project delivery of big data solutions from a project estimation, planning, resourcing, and monitoring perspective
- Drive and participate in requirements-gathering workshops, estimation discussions, design meetings, and status review meetings
- Participate and contribute in solution design and solution architecture for implementing big data projects
- Monitor and review project status and ensure that deliverables are on track with respect to scope, budget, and time
- Transparently communicate project status to all stakeholders on a regular basis
- Identify and manage risks/issues related to deliverables and arrive at mitigation plans to resolve them
- Continuously seek proactive feedback to identify areas of improvement
- Ensure the team creates and maintains knowledge artifacts for the project deliverables
Mandatory Skills:
- Hands-on experience in designing, developing, and managing big data technologies
- Experience managing projects in the areas of Big Data, Data Warehousing, and Business Intelligence using open-source or top-of-the-line tools and technologies
- Good knowledge of dimensional modeling
- Experience managing medium to large projects
- Proven experience in project planning, estimation, execution, and implementation of medium to large projects
- Proficient with various development methodologies such as waterfall, agile/scrum, and iterative
- Good interpersonal skills and excellent communication skills
- Advanced-level Microsoft Project, PowerPoint, Visio, Excel, and Word
Other Skills:
- Knowledge of the Big Data ecosystem
- Business domain knowledge
- Project management training/certification such as PMI's Project Management Professional (PMP) or Certified Scrum Master
Responsibilities:
- Ensure timely and top-quality product delivery
- Ensure that the end product is fully and correctly defined and documented
- Ensure implementation/continuous improvement of formal processes to support product development activities
- Drive the architecture/design decisions needed to achieve cost-effective and high-performance results
- Conduct feasibility analysis; produce functional and design specifications of proposed new features
- Provide helpful and productive code reviews for peers and junior members of the team
- Troubleshoot complex issues discovered in-house as well as in customer environments
Qualifications:
- Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
- Expertise in Java, object-oriented programming, and design patterns
- Experience coding and implementing scalable solutions in a large-scale distributed environment
- Working experience in a Linux/UNIX environment is good to have
- Experience with relational databases and database concepts, preferably MySQL
- Experience with SQL and Java optimisation for real-time systems
- Familiarity with version control systems such as Git and build tools like Maven
- Excellent interpersonal, written, and verbal communication skills
- BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent
Technology Skills:
- Building and operationalising large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud
- Designing and implementing data engineering, ingestion, and transformation functions
Good to Have:
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions such as Attunity/StreamSets and Informatica
- Experience with pre-sales activities (responding to RFPs, executing quick POCs)
- Capacity planning and performance tuning on the Azure stack and Spark
Role Summary/Purpose:
We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging big data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.
Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment
- Overall minimum of 4 to 8 years of software development experience, including 2 years of Data Warehousing domain knowledge
- Must have 3 years of hands-on working knowledge of big data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge of SQL and Linux shell scripting
- Bachelor's/Master's/Engineering degree from a well-reputed university
- Strong communication, interpersonal, learning, and organising skills, matched with the ability to manage stress, time, and people effectively
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working on agile deliveries and Scrum teams
Responsibilities:
- Work as a senior developer/individual contributor as the situation requires
- Be part of Scrum discussions and take requirements
- Adhere to the Scrum timeline and deliver accordingly
- Participate in a team environment for design, development, and implementation
- Take on L3 activities on a need basis
- Prepare Unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
- Treat quality delivery and automation as top priorities
- Coordinate change and deployment in time
- Create healthy harmony within the team
- Own interaction points with members of the core team (e.g. BA, testing, and business teams) and any other relevant stakeholders
Job Title/Designation: Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune
At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.
Must Have:
- 4-10 years of experience in software development
- At least 2 years of relevant work experience on large-scale data applications
- Strong coding experience in Java is mandatory
- Good aptitude, strong problem-solving abilities and analytical skills, and the ability to take ownership as appropriate
- Should be able to do coding, debugging, performance tuning, and deploying apps to production
- Should have good working experience with:
  - the Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
  - Kafka
  - J2EE frameworks (Spring/Hibernate/REST)
  - Spark Streaming or any other streaming technology
- Ability to work sprint stories to completion along with unit test case coverage
- Experience working in agile methodology
- Excellent communication and coordination skills
- Knowledgeable in (and preferably hands-on with) UNIX environments and different continuous integration tools
- Must be able to integrate quickly into the team and work independently towards team goals
Role & Responsibilities:
- Take complete responsibility for the execution of sprint stories
- Be accountable for delivery of tasks in the defined timelines with good quality
- Follow the processes for project execution and delivery
- Follow agile methodology
- Work closely with the team lead and contribute to the smooth delivery of the project
- Understand/define the architecture and discuss its pros and cons with the team
- Take part in brainstorming sessions and suggest improvements in the architecture/design
- Work with other team leads to get the architecture/design reviewed
- Work with the clients and counterparts (in the US) of the project
- Keep all stakeholders updated about project/task status/risks/issues, if there are any
Education: BE/B.Tech from a reputed institute
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune
Description:
- Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce
- Strong understanding of development languages, including Java, Python, Scala, and shell scripting
- Expertise in Apache Spark 2.x framework principles and usage
- Should be proficient in developing Spark batch and streaming jobs in Python, Scala, or Java
- Should have proven experience in performance tuning of Spark applications, from both an application-code and a configuration perspective
- Should be proficient in Kafka and its integration with Spark
- Should be proficient in Spark SQL and data warehousing techniques using Hive
- Should be very proficient in Unix shell scripting and operating on Linux
- Should have knowledge of cloud-based infrastructure
- Good experience in tuning Spark applications and performance improvements
- Strong understanding of data profiling concepts and the ability to operationalise analyses into design and development activities
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, the following phases of the software development life cycle on any project: feasibility planning, analysis, development, integration, test, and implementation
- Capable of working within a team or as an individual
- Experience creating technical documentation
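Data profiling, mentioned above, is typically the first step before designing transformations: summarising each column's null rate and cardinality so that joins, partitioning, and cleansing rules can be chosen sensibly. A minimal stdlib-only sketch (the column names and sample rows are illustrative, and a real pipeline would compute the same counts in Spark or Hive):

```python
from collections import defaultdict

def profile(rows: list) -> dict:
    """Per-column profile: null count and distinct-value count.
    A simplified stand-in for COUNT(*), COUNT(col), and
    COUNT(DISTINCT col) run over a warehouse table."""
    nulls = defaultdict(int)
    distinct = defaultdict(set)
    for row in rows:
        for col, val in row.items():
            if val is None:
                nulls[col] += 1
            else:
                distinct[col].add(val)
    return {col: {"nulls": nulls[col], "distinct": len(distinct[col])}
            for col in distinct.keys() | nulls.keys()}

rows = [
    {"user_id": 1, "country": "IN"},
    {"user_id": 2, "country": None},
    {"user_id": 3, "country": "IN"},
]
stats = profile(rows)
assert stats["country"] == {"nulls": 1, "distinct": 1}
assert stats["user_id"] == {"nulls": 0, "distinct": 3}
```

A high null rate or unexpectedly low cardinality found this way is exactly what "operationalising analyses into design" means in practice: it feeds directly into schema and job design.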
Description:
Requirements:
- Overall experience of 10 years, with a minimum of 6 years of data analysis experience
- MBA Finance or a similar background profile
- Ability to lead projects and work independently
- Must have the ability to write complex SQL, for example for cohort analysis and comparative analysis
- Experience working directly with business users to build reports and dashboards and to answer business questions with data
- Experience doing analysis using Python and Spark is a plus
- Experience with MicroStrategy or Tableau is a plus
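The cohort analysis named above groups users by a shared starting attribute (commonly signup month) and tracks a metric per cohort over subsequent periods. A minimal SQL sketch run against an in-memory SQLite database (the table names, columns, and data are illustrative):

```python
import sqlite3

# Tiny illustrative dataset: user signups and their later orders.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER, signup_month TEXT);
    INSERT INTO users VALUES (1, '2023-01'), (2, '2023-01'), (3, '2023-02');
    CREATE TABLE orders (user_id INTEGER, order_month TEXT);
    INSERT INTO orders VALUES (1, '2023-02'), (2, '2023-01'), (3, '2023-02');
""")

# Cohort = signup month; metric = distinct buyers per cohort per order month.
rows = con.execute("""
    SELECT u.signup_month AS cohort,
           o.order_month,
           COUNT(DISTINCT o.user_id) AS buyers
    FROM users u
    JOIN orders o ON o.user_id = u.id
    GROUP BY u.signup_month, o.order_month
    ORDER BY cohort, o.order_month
""").fetchall()

assert rows == [('2023-01', '2023-01', 1),
                ('2023-01', '2023-02', 1),
                ('2023-02', '2023-02', 1)]
```

The same GROUP BY shape carries over to warehouse SQL; pivoting the result so each row is a cohort and each column an elapsed period gives the familiar retention triangle.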
We are looking to hire passionate Java techies who will be comfortable learning and working on Java and any open-source frameworks and technologies. She/he should be 100% hands-on on technology skills and interested in solving complex analytics use cases. We are working on a complete-stack platform which has already been adopted by some very large enterprises across the world. Candidates with prior experience of having worked in a typical R&D environment and/or at product-based companies with a dynamic work environment will have an additional edge. We currently work with some of the latest technologies, such as Cassandra, Hadoop, Apache Solr, Spark, and Lucene, and some core Machine Learning and AI technologies. Although prior knowledge of these skills is not mandatory for selection, you would be expected to learn new skills on the job.
We at InfoVision Labs are passionate about technology and what our clients would like to accomplish. We continuously strive to understand business challenges, the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers focused on mobile technology, responsive web solutions, and cloud-based solutions.
Job Responsibilities:
- Minimum 3 years of experience in big data skills required
- Complete life-cycle experience with big data is highly preferred
- Skills: Hadoop, Spark, R, Hive, Pig, HBase, and Scala
- Excellent communication skills
- Ability to work independently without supervision
Ixsight Technologies is an innovative IT company with strong intellectual property. Ixsight is focused on creating customer data value through its solutions for Identity Management, Locational Analytics, Address Science, and Customer Engagement. Ixsight is also adapting its solutions to big data and the cloud, and we are in the process of creating new solutions across platforms. Ixsight has served over 80 clients in India, for various end-user applications across the traditional BFSI and telecom sectors; in the recent past we have been catering to new-generation verticals such as hospitality and ecommerce. Ixsight has been featured in Gartner's India Technology Hype Cycle and has been recognised by both clients and peers for pioneering and excellent solutions. If you wish to play a direct part in creating new products, building IP, and being part of product creation, Ixsight is the place.