Greetings from Intelliswift! Intelliswift Software Inc. is a premier software solutions and services company headquartered in Silicon Valley, with offices across the United States, India, and Singapore. The company has a proven track record of delivering results through its global delivery centers and flexible engagement models for over 450 brands, ranging from Fortune 100 companies to growing businesses. Intelliswift provides a variety of services including Enterprise Applications, Mobility, Big Data / BI, Staffing Services, and Cloud Solutions. Growing at an outstanding rate, it has been recognized as the second-largest private IT company in the East Bay.
Domains: IT, Retail, Pharma, Healthcare, BFSI, and Internet & E-commerce
Website: https://www.intelliswift.com/
Experience: 4-8 Years
Job Location: Chennai
Job Description: Skills: Spark, Scala, Big Data, Hive
· Strong working experience in Spark, Scala, big data, HBase, and Hive.
· Should have good working experience in SQL and Spark SQL.
· Good to have knowledge of or experience in Teradata.
· Familiar with general engineering tools: Git, Jenkins, sbt, Maven.
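The posting above asks for SQL and Spark SQL alongside Hive. As a rough illustration of the kind of aggregation query that skill set implies, here is a minimal sketch using SQLite (chosen only so the snippet is self-contained; the table and column names are hypothetical, and the same `GROUP BY` query would look nearly identical in Hive or Spark SQL):

```python
import sqlite3

# Hypothetical events table; in a Spark SQL job this would typically be a
# DataFrame registered as a temp view instead of a SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", "click"), ("u1", "view"), ("u2", "click")],
)

# The aggregation itself is plain ANSI-style SQL: events per user.
rows = conn.execute(
    "SELECT user_id, COUNT(*) AS n FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [('u1', 2), ('u2', 1)]
```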
What You'll Do:
- Develop analytic tools, working on Big Data in a distributed environment; scalability will be key
- Provide architectural and technical leadership on developing our core analytics platform
- Lead development efforts on product features in Java
- Help scale our mobile platform as we experience massive growth
What We Need:
- Passion to build an analytics & personalisation platform at scale
- 3 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for designing and developing from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in Analytics, Hadoop, Pig, Hive, MapReduce, Elasticsearch, MongoDB is an additional advantage
- Strong communication skills, verbal and written
Description
At LogMeIn, we build beautifully simple and easy-to-use cloud-based, cross-platform web, mobile and desktop software products. You probably know us by such industry-defining brand names as GoToMeeting®, GoToWebinar®, join.me®, LastPass®, Rescue® and BoldChat®, as well as other award-winning products and services. LogMeIn enables customers around the world to enjoy highly productive, mobile workstyles. Currently, we're searching for a high-caliber and innovative Big Data and Analytics Engineer who will provide useful insights into the data and enable stakeholders to make data-driven decisions. They'll be part of the team building the next-generation data platform on the cloud using cutting-edge technologies like Spark, Presto, Kinesis, EMR, Pig, Hive, and Redshift. If you're passionate about building high-quality software for data, thrive in an innovative, cutting-edge, startup-like environment, and consider yourself to be a top-notch Data Engineer, then LogMeIn could very well be the perfect fit for you and your career.
Responsibilities
• Responsible for analysis, design and development activities on multiple projects; plans, organizes, and performs the technical work within area of specialization
• Participates in design activity with other programmers on technical aspects relating to the project, including functional specifications, design parameters, feature enhancements, and alternative solutions
• Meets or exceeds standards for the quality and timeliness of the work products that they create (e.g., requirements, designs, code, fixes).
• Implements, unit tests, debugs and integrates complex code; designs, writes, conducts, and directs the development of tests to verify the functionality, accuracy, and efficiency of developed or enhanced software; analyzes results for conformance to plans and specifications, making recommendations based on the results
• Generally provides technical direction and project management within a project/scrum team with increased leadership of others; provides guidance in methodology selection, project planning, and the review of work products; may serve in a part-time technical lead capacity to a limited number of junior engineers, providing immediate direction and guidance
• Keeps technically abreast of trends and advancements within area of specialization, incorporating these improvements where applicable; attends technical conferences as appropriate
Requirements
• Bachelor's degree or equivalent in computer science or a related field is preferred, with 5-8 years of directly related work experience
• Hands-on experience designing, developing and maintaining high-volume ETL processes using Big Data technologies like Pig, Hive, Oozie, Spark, and MapReduce
• Solid understanding of Data Warehousing concepts
• Strong understanding of Dimensional Data Modeling
• Experience in using Hadoop, S3, MapReduce, Redshift, and RDS on AWS
• Expertise in at least one visualization tool like Tableau, QuickSight, Power BI, Sisense, Birst, QlikView, Looker, etc.
• Experience in processing real-time streaming data
• Strong SQL and stored procedure development skills.
Knowledge of NoSQL is an added plus
• Knowledge of Java to leverage Big Data technologies is desired
• Knowledge of a scripting language, preferably Python, or the statistical programming language R is desired
• Working knowledge of the Linux environment
• Knowledge of SDLC and Agile development methodologies
• Expertise in OOAD principles and methodologies (e.g., UML) and OS concepts
• Extensive knowledge of, and discipline in, the software engineering process; experience as a technical lead on complex projects, providing guidance on design and development approach
• Expertise in implementing, unit testing, debugging and integrating code of moderate complexity
• Experience helping others design, write, conduct, and direct the development of tests
• Experience independently publishing papers and blogs, and creating and presenting briefings to technical audiences
• Strong critical thinking and problem-solving skills
• Approaches problems with curiosity and open-mindedness
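The requirements above mention Dimensional Data Modeling. As a rough sketch of the core idea (not any particular employer's pipeline, and with entirely hypothetical record shapes), here is the classic split of raw records into a dimension table with surrogate keys and a fact table that references them:

```python
# Raw operational records, as they might arrive from an ETL extract step.
raw = [
    {"product": "pen", "amount": 3.0},
    {"product": "book", "amount": 12.5},
    {"product": "pen", "amount": 3.0},
]

product_dim = {}   # dimension: product name -> surrogate key
fact_sales = []    # fact rows: (product_key, amount)

for rec in raw:
    # Assign a surrogate key the first time a product is seen,
    # then reference it from the fact table instead of the raw name.
    key = product_dim.setdefault(rec["product"], len(product_dim) + 1)
    fact_sales.append((key, rec["amount"]))

print(product_dim)  # {'pen': 1, 'book': 2}
print(fact_sales)   # [(1, 3.0), (2, 12.5), (1, 3.0)]
```

In a warehouse these two structures become the dimension and fact tables of a star schema; BI tools then join them back together at query time.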
About Us
UpGrad is a company founded by IIT Delhi alumni and Ronnie Screwvala, where we focus on enabling universities to take their programs online. Given the team's background in the education and media sectors, we understand what it takes to offer quality online programs, and at UpGrad we invest alongside universities to build and deliver quality online programs (content, platform, technology, industry collaboration, delivery, and grading infrastructure). You can read about some of our press coverage:
• UpGrad was earlier selected as one of the top ten most innovative companies in India by Fast Company.
• We were also covered by the Financial Times along with other disruptors in Ed-Tech.
• UpGrad is the official education partner for the Government of India's Startup India program.
• We were also ranked as one of the top 25 startups in India in 2018.
• Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning.
At UpGrad, we have partnered with leading universities such as IIIT Bangalore, BITS Pilani, MICA Ahmedabad, IMT Ghaziabad and Cambridge University's Judge Business School to offer programs in the domains of Data, Technology and Management.
Role and Responsibilities
1. Administration of virtual learning lab: Handle the setup and administration of the virtual labs used by students enrolled in courses like Big Data and Data Analytics. The students use these labs for practice and also to run their assignments.
2. Student experience (post-program launch): Assist students with their academic doubts related to the virtual labs and ensure students have a great learning experience on the UpGrad platform.
3. Academic quality assurance: Help create learning material with an in-house team of instructional designers and review its technical quality.
What we are looking for:
1. 3-4 years of project experience deploying cloud solutions (experience on Amazon Web Services (AWS) is mandatory)
2. Hands-on experience in setting up and day-to-day administration of Hadoop ecosystem tools (Hadoop, Spark, Storm, HBase), NoSQL, visualisations, etc.
3. Must be a problem solver with demonstrated experience in solving difficult technology challenges, with a can-do attitude
4. Hands-on work with private or public cloud services in a highly available and scalable production environment
5. Experience building tools and automation that eliminate repetitive tasks
6. Hands-on experience with Service Cloud, including User Permissions, Roles, Objects, Validation Rules, Process Builder, Workflow Rules, Communities, Visual Workflow, Email to Case, and Case Management
A Full Stack Developer belongs to a self-organizing, cross-functional development team and is able to convert sprint backlog items into a shippable product. They collectively own end-to-end development responsibility for a given Agile Team / POD, and will design, code and test the user stories committed for a sprint. Works independently under limited supervision. Possesses the skills to effectively deal with issues and challenges within the field of specialization to develop application solutions.
Primary Responsibilities: Lead an agile team within a Release Team/Value Stream or IT Support Team. Accountable for team delivery. Develop and automate business solutions by creating new and modifying existing software applications. Develop innovations, strategies, processes, and best practices. Technically hands-on and excellent in design, coding and testing. Collectively responsible for end-to-end product quality. Creation of high-/low-level application design. Participates and contributes in Sprint Ceremonies. Promotes and develops a culture of collaboration, accountability & quality. Provides technical support to the team and helps the team resolve technical issues. Works closely with business teams, onshore partners, and deployment and infrastructure teams. <Others – If any>
Required Qualifications:
8 - 13 years of experience working on multiple layers of technology
Excellent verbal, written and interpersonal communication skills
Demonstrated capability to create high-/low-level designs
Engineering Practices
o Agile: Working experience of 2+ years in an Agile team. Understanding of various agile methodologies such as Scrum and Kanban. Working experience of Test-Driven Development.
o ITIL/ITSM: Good understanding of IT Support / Production Support
o Data / Information Security – Working knowledge of the below: common security vulnerabilities, their causes, and implementations to fix them; security scanning methodologies and tools (e.g.
HP Fortify, WhiteHat, WebInspect)
o Good in Data Structures, Algorithms and Design Patterns
o Demonstrates excellent problem-solving skills
o Good in design thinking and in approaching business problems by applying suitable technologies (cost-efficient, high-performance, resilient and scalable)
Common Technical Skills
o Database: 2+ years of working experience with databases (SQL or PL/SQL) and good knowledge of them. Exposure to Big Data and NoSQL/flat databases.
o API / Web Services: 1+ year of working experience in Web Services / APIs, REST architecture, etc.
o DevOps: Working experience in the setup or maintenance of a CI/CD pipeline (test, build, deployment and monitoring automation). 2+ years of working experience in software configuration management and packaging. Experience using automated deployment and release management tools such as XL Deploy, XL Release, and Jenkins. 2+ years of working knowledge of build tools such as Maven/Gradle.
o Cloud: Working experience with or good knowledge of cloud platforms (e.g. OpenShift, Azure, AWS). Capable of demonstrating how to develop a sample cloud-based application / microservices architecture.
o Open Source: Demonstrated hands-on knowledge of open-source adoption and use cases. Real implementation experience with one or more open-source technologies (MySQL, JBoss Platform, Apache Camel). Good to have: contributions to one or more technical forums related to an open-source technology.
Product / Project / Program Related Tech Stack:
o Front End – <Desired Technologies and Tools>
o Back End – <Desired Technologies and Tools>
o Middleware – <Desired Technologies and Tools>
o Testing – <Desired Technologies and Tools>
o DevOps – <Desired Tools>
o Others – <Desired Technologies and Tools>
o Certifications – <Desired Certifications>
o Development Methodology / Engineering Practices – Agile (SCRUM / KANBAN / SAFe)
Preferred Qualifications:
Excellent verbal, written and interpersonal communication skills
Ability to work collaboratively in a global team with a positive team spirit
Knowledge of the US Healthcare domain
Knowledge or certification – SAFe
Knowledge or certification – ITIL
Work experience in product engineering
Position: R&D Senior Engineer
Reports To: Chief Architect
Experience: 4+ Years
Education: BE/ME/MS
Job Summary:
- We are seeking a highly skilled, experienced Java developer to join our R&D team. In this role, you will help run various proofs of concept employing new and bleeding-edge technologies, compare them with similar technologies, and draw out their merits and demerits.
- Demonstrate an MVP with small use cases; once it is reviewed and approved, design and develop a first-cut solution that is scalable, relevant, and critical to our company's success, then hand it over to the engineering team to take forward, guiding them in making it a full-fledged product/service/solution.
- You will focus on Java / Java EE / Python development throughout and must have a solid skill set, problem-solving ability, analytical thinking, a desire to continue to grow as a developer, and a team-player mentality. POC work involves experimenting with bleeding-edge technologies across languages like Java and Python.
Duties and Responsibilities:
- Provide solutions in terms of new technologies/tools/services for current technology bottlenecks in the product(s)
- Work on proofs of concept for product/business requirements, employing the latest technologies to understand their fit in the product's technology stack and evaluate their merits and demerits
- Gather requirements from internal and external stakeholders
- Participate in the design and implementation of essential applications
- Demonstrate expertise and add valuable input throughout the POC/development lifecycle
- Help design and implement scalable, lasting technology solutions
- Review current systems, suggesting updates as needed
- Test and debug new applications and updates
- Resolve reported issues and reply to queries in a timely manner
- Develop and utilize technical change documentation
- Strive to deploy all products and updates on time
- Help improve code quality by implementing recommended best practices
- Remain up to date on current best practices, trends, and industry developments
- Maintain a high standard of work quality and encourage others to do the same
- Help junior team members grow and develop their skills
- Identify potential challenges and bottlenecks in order to address them proactively
Requirements and Qualifications:
- BS/MS/MTech in computer science or a related field required
- Minimum 4 years of experience at a reputed software firm
- Strong knowledge of computer science fundamentals like algorithms and data structures
- Strong problem-solving and analytical thinking capability
- Strong working knowledge of Java and J2EE technologies
- Significant experience working with SQL
- Significant experience working with NoSQL stores like Mongo/Dynamo/MemSQL/graph DBs
- Significant experience working with ElastiCache
- Significant experience working with distributed architecture
- Knowledge of or working experience in Python
- Significant experience working with Web Services and REST frameworks
- Experience with AWS (S3, Lambda, Kinesis, SQS)
highly desired
- Experience with frameworks like Spring, Hadoop, Spark, Kafka a plus
- Experience with Machine Learning, NLP a plus
- Familiarity with Elasticsearch
- Familiarity with Java web application servers like Tomcat, WebLogic, JBoss
- Familiarity with microservices and/or Spring Boot
- Familiarity with HTML, CSS, JavaScript
- Having hobby projects is a plus
Manthan Profile: Manthan is the Chief Analytics Officer for consumer industries worldwide. Manthan's portfolio of analytics-enabled business applications, advanced analytics platforms and solutions is architected to help users across industries walk the complete data-to-result path: analyze, take guided decisions and execute those decisions in real time. Sophisticated yet intuitive analytical capability, coupled with the power of big data, mobility and cloud computing, brings users business-ready applications that provide on-demand access and real-time execution - the only path to profit in a contemporary, on-demand and connected economy. Manthan serves over 200 leading organizations across 23 countries. With the recent introduction of Maya, the world's first AI-powered conversational agent for business analytics, Manthan is pioneering the move to zero-touch UIs and transforming user interactions with complex analytics applications. Manthan is one of the most awarded analytics innovators among analysts and customers alike. To learn how businesses can gain from analytics, please visit https://www.manthan.com
• 5-6 years of experience in building scalable web services and applications with service-oriented architecture.
• Strong experience in the Java/Scala programming languages, writing performant, scalable and unit-tested code.
• Strong expertise in technologies/platforms: Spring Boot, JPA, NoSQL (Cassandra).
• Strong expertise in front-end technologies - AngularJS.
• Good object-oriented design skills and strong knowledge of design patterns.
• Solid understanding of software deployment and infrastructure tuning on a cloud computing infrastructure (such as Amazon Web Services).
• Strong experience with relational databases such as MariaDB/MySQL.
• Proven commitment to quality and an ability to create maintainable and extensible code.
• Strong fundamentals of computer science and engineering.
• Experience working with Agile software methodologies.
• Experience in leading projects and mentoring people towards achieving high-quality software.
• Experience with Nginx, Tomcat, Redis, Cassandra, ZooKeeper, ActiveMQ and Hadoop is a plus.
• Bachelor's or Master's in Computer Science, engineering or a related discipline.
About Us
DataWeave is a data platform which aggregates publicly available data from disparate sources and makes it available in the right format to enable companies to take strategic decisions using trans-firewall analytics. It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web - at serious scale! Read more at Become a DataWeaver.
Skills and Requirements:
● Good communication and collaboration skills.
● Ability to code and script with a strong grasp of CS fundamentals, and excellent problem-solving abilities.
● Comfortable with at least one coding language; Python would be a plus.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets is a plus.
● Knowledge of building crawlers is a plus.
● Working knowledge of open-source tools such as MySQL, Solr, Elasticsearch, and Cassandra would be a plus.
Growth at DataWeave
● Based on performance, permanent employment will be offered between 3-6 months into the internship.
● You have the opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you.
● Competitive salary packages.
Job Description
In this role you will help us build, improve and maintain our huge data infrastructure, where we collect terabytes of logs daily. Data-driven decision-making is crucial to the success of our customers, and this role is central to ensuring we have a cutting-edge data infrastructure to do things faster, better, and cheaper!
Experience: 1 - 3 Years
Required Skills
- Must be a polyglot with good command over Java, Scala and a scripting language
- Non-trivial project experience in distributed computing frameworks like Apache Spark/Hadoop/Pig/Kafka/Storm, with sound knowledge of their internals
- Expert knowledge of relational databases like MySQL, and in-memory data stores like Redis
- Regular participation in coding/hacking contests like TopCoder, Code Jam and Hacker Cup is a huge plus
Prerequisites
- Strong analytical skills and a solid foundation in computer science fundamentals, especially data structures/algorithms, object-oriented principles, operating systems, and computer networks
- Ability and willingness to take ownership and work under minimum supervision, independently or as part of a team
- Passion for innovation and a "never say die" attitude
- Strong verbal and written communication skills
Education: BTech/MTech/MS/Dual degree in Computer Science with above-average academic credentials
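The posting above asks for sound knowledge of the internals of frameworks like Spark and Hadoop. The programming model they share is map/shuffle/reduce; as an illustration only (a single-process miniature, not a distributed implementation), the canonical word count looks like this:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    # The "shuffle" that a real framework performs across the cluster is
    # implicit here: we simply group pairs by key in one dictionary.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["big data big wins", "data pipelines"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(counts)  # {'big': 2, 'data': 2, 'wins': 1, 'pipelines': 1}
```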
Roles and Responsibilities:
● Inclined towards working in a start-up-like environment.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Design and build robust and scalable data engineering solutions for structured and unstructured data, delivering business insights, reporting and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations and other integrations).
Skills and Requirements:
● Good communication and collaboration skills, with 3-7 years of experience.
● Ability to code and script with a strong grasp of CS fundamentals, and excellent problem-solving abilities.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of building crawlers and data mining is a plus.
● Working knowledge of open-source tools such as MySQL, Solr, Elasticsearch, and Cassandra (data stores) would be a plus.
Responsibilities: You will interact directly with colleagues across all responsibility areas and with the Director of Engineering. The successful candidate for this position:
- Designs and implements well-architected and scalable solutions
- Collaborates with various teams in releasing high-quality software
- Performs code reviews and contributes to healthy coding conventions
- Assists in integration with customer systems
- Provides timely responses to internal technical questions
- Demonstrates leadership skills in navigating through tense periods and keeping calm
Our Culture:
- Integrity and motivation are more important than skill and experience
- Cross-company team building and collaboration
- Diverse backgrounds and a highly talented & passionate group of individuals
Ideal Candidate: The ideal candidate is a senior engineer with substantial development experience and high standards for code quality & maintainability.
Basic Qualifications:
- 4-year degree in Computer Science or Computer Engineering
Preferred Qualifications:
- 5+ years of development experience
- Experience in Java or Scala
- Experience with all parts of the SDLC, including CI/CD and testing methodologies
- Experience working with NoSQL technologies and message queue management
- Self-motivated and able to work with minimum guidance
- Experience in a startup or rapid-growth product or project
- Comfortable with modern version control and agile development
Bonus Points:
- Experience working with microservices, containers or big data technologies
- Working knowledge of cloud technologies like GCE and AWS
- Writes blog posts and has a strong record on Stack Overflow and similar sites
This role will be responsible for developing and deploying a game-changing and highly disruptive advertising technology platform. This person would also take on the following responsibilities:
- Gather and process raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Work closely with our engineering team to integrate your innovations and algorithms into our production systems
- Support business decisions with ad hoc analysis as needed
- Propose and investigate new techniques
- Troubleshoot production issues and identify practical solutions
- Perform routine check-ups, backups and monitoring of the entire MySQL and Hadoop ecosystem
- Take end-to-end responsibility for the traditional databases (MySQL) and the Big Data ETL, analysis and processing life cycle in the organization
- Build, deploy and maintain real-time streaming pipelines and real-time analytics
- Manage deployments of big-data clusters across private and public cloud platforms
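The first responsibility above - gathering and processing raw data via scripts, APIs and SQL - can be sketched as a tiny extract-transform-load loop. This is illustrative only: the payload is inlined (in practice it would come from an API call or a scraper), the field names are hypothetical, and SQLite stands in for the MySQL store the posting mentions:

```python
import json
import sqlite3

# Extract: a JSON payload as an API might return it (inlined here).
payload = '[{"ad_id": 1, "clicks": "10"}, {"ad_id": 2, "clicks": "3"}]'

# Transform: parse and coerce the string counts to integers.
records = [(r["ad_id"], int(r["clicks"])) for r in json.loads(payload)]

# Load: insert into a SQL table, then run an ad hoc aggregate query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ad_stats (ad_id INTEGER, clicks INTEGER)")
conn.executemany("INSERT INTO ad_stats VALUES (?, ?)", records)
total = conn.execute("SELECT SUM(clicks) FROM ad_stats").fetchone()[0]
print(total)  # 13
```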
It is my pleasure to introduce you to IDEAS2IT Technologies, Chennai. If you are looking for a challenging position as a Big Data engineer, solving complex business problems by applying the latest in Data Science, Machine Learning and AI, read on. Ideas2IT is a high-end product engineering firm that rolls out its own products and also helps Silicon Valley firms with their product engineering. We are looking for above-average programmers to be part of our Data Science Lab. You will be working on projects like: an AI platform built using Google TensorFlow for a predictive hiring product; a betting-odds platform that matches offered odds to leverage spreads; a PPO platform for predictive pricing and promotions for enterprise eCommerce. Part of your tool set will be Google TensorFlow, Python ML frameworks, Apache Spark, R, Google BigQuery, Scala/Octave, Kafka and so on. If you have any relevant experience, great! If not, it doesn't matter. We believe in hiring people with high IQ and the right attitude over ready-made skills. As long as you are passionate about building world-class enterprise products and understand in depth whatever technology you are working on, we will bring you up to speed on all the technologies we use. Oh, BTW, did we mention that you need to be super smart? Sounds interesting? Ideas2IT is a high-end product firm. Started by an ex-Googler, Murali Vivekanandan, we count Siemens, Motorola, eBay, Microsoft and Zynga among our clients. We solve some very interesting problems in the USA startup ecosystem and have created great products in the process. When we build, we build great! We actively contribute to open-source projects. We've built our own frameworks. We're betting the house on Big Data, and with a Stanford grad leading the team, we're sure to win. We spun out two of our products as separate companies last year and raised institutional funds - Pipecandy, Idearx.
Dear Candidate, please find the details below:
Ruby on Rails Developer
Years of experience: 3 to 6 years
Required Skills: Ruby, Ruby on Rails; experience in developing web applications using Ruby and RoR
Databases: PostgreSQL
Added advantage if the candidate knows REST
OS: Linux
Please share your details to firstname.lastname@example.org with the below details:
Total Exp:
Rel Exp:
Current CTC:
Expected CTC:
Notice Period:
Niyuj is a product engineering company that engages with the customer at different levels in the product development lifecycle in order to build quality products, on budget and on time. Founded in 2007 by a passionate technology leader, with stable and seasoned leadership having hands-on experience working for, or consulting with, companies from bootstrapped start-ups to large multinationals. Global experience in the US, Australia & India; we have worked with everyone from Fortune 500 companies to prominent startups, with clients including Symantec, VMware, Carbonite, and Edgewater Networks.
Domain areas we work in:
CLOUD SERVICES - Enterprises are rushing to incorporate cloud computing, big data, and mobile into their IT infrastructures.
BIG-DATA ANALYTICS - Revolutionizing the way Fortune 1000 companies harness billions of data points and turn them into a competitive advantage.
NETWORK AND SECURITY - Network- and security-related system-level work that meets customer demands and delivers real value.
Our prime customer, Carbonite, is America's #1 cloud backup and storage company, with over 1.5 million customers, headquartered in Boston, MA, with offices in 15 locations across the world.
Your potential for exponential growth: Your experience and expertise would be a great addition to our team, and you will have an opportunity to work closely with industry leaders, literally sitting across the table and jointly building the future with folks who are noted gurus and industry veterans from prestigious institutions like the IITs and top US universities, with industry experience in Fortune 500 companies like EMC, Symantec and VERITAS.
As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly. Across teams, we will look to you to make key decisions for our infrastructure, networking and security. You will also own, scale, and maintain the compute and storage infrastructure for various product teams. The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems. They understand what it takes to work in a startup environment and have the zeal to establish a culture of infrastructure awareness and transparency across teams and products. They fail fast, learn faster, and execute almost instantly.
Technology Stack: configuration management tools (Ansible/Chef/Puppet), cloud service providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem experience is a plus.
WHY YOU?
* Because you love to take ownership of the infrastructure, allowing developers to deploy and manage microservices at scale.
* Because you love tinkering with and building upon new tools and technologies to make your work easier and streamlined with the industry's best practices.
* Because you have the ability to analyze and optimize performance in high-traffic internet applications.
* Because you take pride in building scalable and fault-tolerant infrastructural systems.
* Because you see explaining complex engineering concepts and design decisions to the less tech-savvy as an interesting challenge.
IN MONTH 1, YOU WILL...
* Learn about the products and internal tools that power our data intelligence platform.
* Understand the underlying infrastructure and play around with the tools used to manage it.
* Get familiar with the current architectural challenges arising from handling data and web traffic at scale.
IN MONTH 3, YOU WILL...
* Become an integral part of the architectural decisions taken across teams and products.
* Play a pivotal role in establishing a culture of infrastructure awareness and transparency across the company.
* Become the go-to person for engineers to get help solving issues with performance and scale.
IN MONTH 6 (AND BEYOND), YOU WILL...
* Hire a couple of engineers to strengthen the team and build systems to help manage high-volume and high-velocity data.
* Be involved, along with the DevOps team, in tasks ranging from tracking statistics and managing alerts to deploying new hosts and debugging intricate production issues.
About SocialCops
SocialCops is a data intelligence company that is empowering leaders in organizations globally, including the United Nations & Unilever. Our platform powers over 150 organizations across 28 countries. As a pioneering tech startup, SocialCops was recognized in the list of Technology Pioneers 2018 by the World Economic Forum and by the New York Times in its list of 30 global visionaries. We were also part of the Google Launchpad Accelerator 2018. AasaanJobs named SocialCops one of the best Indian startups to work for in 2018.
Read more about our work and case studies: https://socialcops.com/case-studies/
Watch our co-founder's TEDx talk on how big data can influence decisions that matter: https://www.youtube.com/watch?v=C6WKt6fJiso
Want to know how much impact you can drive in under a year at SocialCops? See our 2017 year in review: https://socialcops.com/2017/
For more information on our hiring process, check out our blog: https://blog.socialcops.com/inside-sc/team-culture/interested-joining-socialcops-team-heres-need/
ITTStar Global Services is a subsidiary unit in Bengaluru with its head office in Atlanta, Georgia. We are primarily into data management and data life-cycle solutions, including machine learning and artificial intelligence. For further info, visit ITTstar.com. As discussed over the call, I am forwarding the job description. We are looking for enthusiastic and experienced data engineers to be part of our bustling team of professionals at our Bengaluru location. JOB DESCRIPTION: 1. Experience in Spark & Big Data is mandatory. 2. Strong programming skills in Python / Java / Scala / Node.js. 3. Hands-on experience handling multiple data types: JSON/XML/Delimited/Unstructured. 4. Hands-on experience working with at least one relational and/or NoSQL database. 5. Knowledge of SQL queries and data modeling. 6. Hands-on experience working on ETL use cases, either on-premise or in the cloud. 7. Experience in any cloud platform (AWS, Azure, GCP, Alibaba). 8. Knowledge of one or more AWS services like Kinesis, EC2, EMR, Hive integration, Athena, Firehose, Lambda, S3, Glue Crawler, Redshift, RDS is a plus. 9. Good communication skills and self-driven - should be able to deliver projects with minimum instructions from the client.
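The multi-format requirement in point 3 can be illustrated with a minimal sketch using only the Python standard library: normalising a record that may arrive as JSON, XML, or delimited text into one common shape. The field names ("id", "name") are hypothetical, chosen purely for illustration.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# Normalise a record arriving as JSON, XML, or delimited text into one
# dict shape. Field names ("id", "name") are hypothetical.
def parse_json(text):
    rec = json.loads(text)
    return {"id": int(rec["id"]), "name": rec["name"]}

def parse_xml(text):
    root = ET.fromstring(text)
    return {"id": int(root.findtext("id")), "name": root.findtext("name")}

def parse_delimited(text, delimiter=","):
    # First line is assumed to be a header row.
    row = next(csv.DictReader(io.StringIO(text), delimiter=delimiter))
    return {"id": int(row["id"]), "name": row["name"]}

records = [
    parse_json('{"id": 1, "name": "asha"}'),
    parse_xml("<rec><id>2</id><name>ravi</name></rec>"),
    parse_delimited("id,name\n3,meena"),
]
```

Once every source format funnels into the same dict shape, downstream transformation and loading code only has to handle one representation.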
Job Skill Requirements: • 4+ years of experience building and managing complex products/solutions • 2+ years of experience in DW/ELT/ETL technologies (nice to have) • 3+ years of hands-on development experience using Big Data technologies like Hadoop and Spark • 3+ years of hands-on development experience using Big Data ecosystem components like Hive, Impala, HBase, Sqoop, Oozie, etc. • Proficient-level programming in Scala • Good to have hands-on experience building web services in a Python/Scala stack • Good to have experience developing RESTful web services • Knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS)
• Looking for Big Data Engineer with 3+ years of experience. • Hands-on experience with MapReduce-based platforms, like Pig, Spark, Shark. • Hands-on experience with data pipeline tools like Kafka, Storm, Spark Streaming. • Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto. • Hands-on experience in managing Big Data on a cluster with HDFS and MapReduce. • Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm. • Experience with Azure cloud, Cognitive Services, Databricks is preferred.
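The windowed, micro-batch style of processing that tools like Spark Streaming and Flink provide can be sketched in plain Python to show the core idea. The event shape (timestamp, word) and the 10-second window size below are invented for illustration, not taken from any of the listed tools.

```python
from collections import Counter, defaultdict

# Tumbling-window count: the core idea behind micro-batch stream
# processing, sketched with the standard library only. Events are
# (timestamp_seconds, word) pairs; the window size is arbitrary.
def tumbling_word_count(events, window_size=10):
    windows = defaultdict(Counter)
    for ts, word in events:
        # Bucket each event into the window that contains its timestamp.
        window_start = (ts // window_size) * window_size
        windows[window_start][word] += 1
    return dict(windows)

events = [(1, "click"), (4, "view"), (9, "click"), (12, "click"), (19, "view")]
result = tumbling_word_count(events)
# Window [0, 10) holds three events; window [10, 20) holds two.
```

Real streaming engines add distribution, fault tolerance, and event-time handling on top of this bucketing idea, but the aggregation logic per window is essentially the same.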
Analytical Skills: You will work with large amounts of data; you will need to see through the data and analyze it to reach conclusions, and you will need math skills to estimate numerical data. Communication Skills: You will need to write and speak clearly, easily communicating complex ideas. Critical Thinking: You must look at the numbers, trends, and data and come to new conclusions based on the findings. Attention to Detail: Be vigilant in your analysis to come to correct conclusions. Programming Skills: Should know Python programming, R, MySQL, and Hadoop. Deep Learning Skills: Should have worked on machine learning & deep learning.
Job Requirement: Installation, configuration, and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs. Capable of processing large sets of structured, semi-structured, and unstructured data. Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design, and review. Familiar with data architecture for designing data ingestion pipelines, Hadoop information architecture, data modeling and data mining, machine learning, and advanced data processing. Optional: visual communicator with the ability to convert and present data in an easily comprehensible visualization using tools like D3.js and Tableau. Enjoys being challenged and solving complex problems on a daily basis. Proficient in executing efficient and robust ETL workflows. Able to work in teams and collaborate with others to clarify requirements. Able to tune Hadoop solutions to improve performance and end-user experience. Strong coordination and project management skills to handle complex projects. Engineering background.
Job Title: Software Developer – Big Data Responsibilities We are looking for a Big Data Developer who can drive innovation, take ownership, and deliver results. • Understand business requirements from stakeholders • Build & own Mintifi Big Data applications • Be heavily involved in every step of the product development process, from ideation to implementation to release • Design and build systems with automated instrumentation and monitoring • Write unit & integration tests • Collaborate with cross-functional teams to validate and get feedback on the efficacy of results created by the big data applications; use the feedback to improve the business logic • Take a proactive approach to turning ambiguous problem spaces into clear design solutions. Qualifications • Hands-on programming skills in Apache Spark using Java or Scala • Good understanding of Data Structures and Algorithms • Good understanding of relational and non-relational database concepts (MySQL, Hadoop, MongoDB) • Experience in Hadoop ecosystem components like YARN, Zookeeper would be a strong plus
Responsibilities: Design and develop ETL frameworks and data pipelines in Python 3. Orchestrate complex data flows from various data sources (like RDBMS, REST APIs, etc.) to the data warehouse and vice versa. Develop app modules (in Django) for enhanced ETL monitoring. Devise technical strategies for making data seamlessly available to the BI and Data Sciences teams. Collaborate with engineering, marketing, sales, and finance teams across the organization and help Chargebee develop complete data solutions. Serve as a subject-matter expert for available data elements and analytic capabilities. Qualification: Expert programming skills with the ability to write clean and well-designed code. Expertise in Python, with knowledge of at least one Python web framework. Strong SQL knowledge, and high proficiency in writing advanced SQL. Hands-on experience in modeling relational databases. Experience integrating with third-party platforms is an added advantage. Genuine curiosity, proven problem-solving ability, and a passion for programming and data.
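An extract-transform-load pipeline of the shape this role describes can be sketched in minimal form with the Python 3 standard library alone. The invoice fields, table name, and cents-to-amount conversion below are hypothetical, and sqlite3 stands in for the data warehouse.

```python
import sqlite3

# Minimal ETL sketch in pure Python 3. The rows, table name, and
# transformation are hypothetical, for illustration only.
def extract():
    # Stand-in for pulling rows from an RDBMS or REST API source.
    return [
        {"invoice_id": "inv_1", "amount_cents": 1999, "status": "paid"},
        {"invoice_id": "inv_2", "amount_cents": 500, "status": "void"},
    ]

def transform(rows):
    # Keep paid invoices only and convert cents to a decimal amount.
    return [
        (r["invoice_id"], r["amount_cents"] / 100)
        for r in rows
        if r["status"] == "paid"
    ]

def load(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS paid_invoices (invoice_id TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO paid_invoices VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(extract()))
total = conn.execute("SELECT SUM(amount) FROM paid_invoices").fetchone()[0]
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable and swappable - the same structure scales up when the source becomes a REST API and the sink becomes a real warehouse.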
Looking for extremely smart software engineers who can solve complex distributed software issues. Someone who has handled lots of structured and unstructured data is preferred. Description: • Bachelor's degree in computer science or a related discipline and at least 6 years of experience • Strong experience with Java, J2EE, Spring, Maven • Hands-on experience with SOAP and RESTful web services • Hands-on experience with backend technologies (Cassandra, Kafka, Storm, etc.) • Experience with building self-healing, automatic fault detection and recovery mechanisms is good to have.
RESPONSIBILITIES: 1. Full ownership of tech, right from driving product decisions to architecture to deployment. 2. Develop cutting-edge user experiences and build cutting-edge technology solutions like instant messaging in poor networks, live discussions, live videos, and optimal matching. 3. Use billions of data points to build a user personalization engine. 4. Build a data network effects engine to increase engagement & virality. 5. Scale the systems to billions of daily hits. 6. Deep dive into performance, power management, memory optimization & network connectivity optimization for the next billion Indians. 7. Orchestrate complicated workflows, asynchronous actions, and higher-order components. 8. Work directly with Product and Design teams. REQUIREMENTS: 1. Should have hacked some (computer or non-computer) system to your advantage. 2. Built and managed systems with a scale of 10Mn+ daily hits. 3. Strong architectural experience. 4. Strong experience in memory management, performance tuning, and resource optimization. 5. PREFERENCE - if you are a woman, an ex-entrepreneur, or have a CS bachelor's degree from IIT/BITS/NIT. P.S. If you don't fulfill one of the requirements, you need to be exceptional in the others to be considered.
Description Must have direct hands-on experience (4 years) building complex Data Science solutions. Must have fundamental knowledge of inferential statistics. Should have worked on predictive modelling using Python / R. Experience should include the following: file I/O, data harmonization, data exploration, machine learning techniques (supervised, unsupervised), multi-dimensional array processing, deep learning, NLP, image processing. Prior experience in the healthcare domain is a plus. Experience using Big Data is a plus. Should have excellent analytical and problem-solving ability, and should be able to grasp new concepts quickly. Should be well familiar with the Agile project management methodology. Should have excellent written and verbal communication skills. Should be a team player with an open mind.
Description Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce. Strong understanding of development languages including Java, Python, Scala, and shell scripting. Expertise in Apache Spark 2.x framework principles and usage. Should be proficient in developing Spark batch and streaming jobs in Python, Scala, or Java. Should have proven experience in performance tuning of Spark applications, from both an application code and a configuration perspective. Should be proficient in Kafka and its integration with Spark. Should be proficient in Spark SQL and data warehousing techniques using Hive. Should be very proficient in Unix shell scripting and in operating on Linux. Should have knowledge of cloud-based infrastructure. Strong understanding of data profiling concepts and ability to operationalize analyses into design and development activities. Experience with best practices of software development: version control systems, automated builds, etc. Experienced in, and able to lead, the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test, and implementation. Capable of working within a team or as an individual. Experience creating technical documentation.
Requirements: Minimum 4 years' work experience in building, managing, and maintaining analytics applications B.Tech/BE in CS/IT from Tier 1/2 institutes Strong fundamentals of data structures and algorithms Good analytical & problem-solving skills Strong hands-on experience in Python In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ) Experience in building data pipelines & real-time analytics systems Experience in SQL (MySQL) & NoSQL (Mongo/Cassandra) databases is a plus Understanding of Service-Oriented Architecture A record of delivering high-quality work with significant contributions Expert in git, unit tests, technical documentation, and other development best practices Experience in handling small teams
We are looking for a Machine Learning Developer who possesses a passion for machine technology & big data and will work with the next-generation Universal IoT platform. Responsibilities: • Design and build machines that learn, predict, and analyze data. • Build and enhance tools to mine data at scale. • Enable the integration of Machine Learning models in the Chariot IoT Platform. • Ensure the scalability of Machine Learning analytics across millions of networked sensors. • Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications. • Develop generalizable APIs so other engineers can use our work without needing to be a machine learning expert.
Responsibilities · Ensure timely and top-quality product delivery · Ensure that the end product is fully and correctly defined and documented · Ensure implementation/continuous improvement of formal processes to support product development activities · Drive the architecture/design decisions needed to achieve cost-effective and high-performance results · Conduct feasibility analysis; produce functional and design specifications of proposed new features · Provide helpful and productive code reviews for peers and junior members of the team · Troubleshoot complex issues discovered in-house as well as in customer environments Qualifications · Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc. · Expertise in Java, Object-Oriented Programming, Design Patterns · Experience coding and implementing scalable solutions in a large-scale distributed environment · Working experience in a Linux/UNIX environment is good to have · Experience with relational databases and database concepts, preferably MySQL · Experience with SQL and Java optimization for real-time systems · Familiarity with version control systems (Git) and build tools like Maven · Excellent interpersonal, written, and verbal communication skills · BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent
Description Who We Are Bridge International Academies is the world's largest and fastest-growing chain of primary and pre-primary schools, with more than 500 academies and 100,000 pupils in Kenya, Uganda, Nigeria, India, and Liberia. We democratize the right to succeed by giving families living in poverty access to the high-quality education that will allow their children to live a very different life. We leverage experts, data, and technology in order to standardize and scale every aspect of quality education delivery, from how and where academies are built to how teachers are selected and trained, and how lessons are delivered and monitored for improvement. We are vertically-integrated, tech-enabled, and on our way to profitability. Bridge expects to continue rapid expansion in 2018 across existing markets. The Bridge Offer Roughly 2.7 billion people live on less than $2/day. In their communities, there is a huge gap between the education offered and the needs of the population. Too often the schools available to them fail to deliver for these families. The quality offered results in the average pupil from our communities in East Africa failing to reach proficiency in primary school and, on average, failing the primary exit exams that are critical to their development. Teachers are unresponsive and occasionally abusive, and fees are often unaffordable. Even government schools can cost families a significant amount of money after all the additional fees are added up. With 47% of classroom teaching time lost due to teacher absenteeism or neglect, 55% of families in our communities end up choosing private schools instead, but then fear for the stability and sustainability of their choice, as many schools close after only a few years of service. Both the government schools and the private schools tend to lack well-conceived scope and sequences, instructional materials, student achievement data, and the capacity to react to that data.
Families are actively searching for a better academic alternative. Enter Bridge International Academies. As of September 2017, Bridge operates more than 500 academies, serving roughly 100,000 pupils in Kenya, Uganda, Nigeria, India, and Liberia. Bridge utilises a scripted-learning education methodology coupled with 'big data' (all teachers have tablets for instruction, assessment, and data-gathering) that allows us to make curriculum a little better every day. With plans to enrol ten million students ten years from now, Bridge International Academies offers a tremendous opportunity to grow with one of the world's most exciting, ambitious, and socially conscious companies, with leadership roles available across a number of competencies and geographies. Tech at Bridge Technology plays a critical role at Bridge in enabling us to provide education at massive scale and low cost - it's one of the key elements that gives us the ability to deliver what no one else can. Tech spans several key functions, from the hardware and software that our academies use to run all aspects of teaching and management, including mobile payments, to the systems that enable our country headquarters to manage massive local operations, to the data backbone that informs all of our strategic and tactical decision making. It's a lot of custom software development and a lot of back office systems. We've got a ridiculously ambitious mission at Bridge, and it's a place where passionate technologists have a chance to directly change the world. No kidding. About the Role Tech at Bridge is a highly complex, vertically-integrated affair, with systems supporting an ever-expanding range of functions and countries, and crossing between software development, IT operations, academy operations, and logistics/supply chain.
At the same time, our teams run lean and things change fast - governments make policy decisions that affect us, launching new countries is a frenetic affair, and we still need to evolve our core technology offering. We are looking for a full-time Senior Software Engineer to join our new Hyderabad-based cross-functional software development team, which will participate in building the software that powers and improves efficiency to enhance our competitive advantage. This person should be familiar with design and implementation issues specific to data-driven, highly scalable environments and be able to handle such issues with flexibility and ingenuity. The ideal candidate will have a strong customer focus, a proven track record of delivering high-quality products in a continuous delivery environment, and an appreciation for clean and simple code. Bridge especially values T-shaped team members - individuals with deep expertise in particular areas, but comfortable working across all parts of the technology stack. What You Will Do Assume ownership over the server-side architecture of the Bridge software platforms Design, implement, and support new products and features Analyse and improve the server-side architecture with a focus on maintainability and scalability Mentor and guide junior engineers, including performing code reviews Collaborate with project sponsors to elaborate requirements and facilitate trade-offs that maximise customer value Work with product and development teams to establish overall technical direction and product strategy What You Will Have You have a BA/BS in Computer Science or a related technical field. You have 6 years of enterprise software development experience. You are comfortable recommending and advocating for enterprise architectural best practices for highly-available, scalable, and reliable implementations.
You have direct experience integrating off-the-shelf and custom-built software, and understand the trade-offs between building and buying software. You function well in a fast-paced, informal environment where constant change is the norm and the bar for quality is set high. You have enterprise-level experience with continuous delivery practices and tools (e.g. Jenkins, Bamboo, GoCD, Octopus). Proficiency in test-driven development (TDD) and/or behaviour-driven development (BDD) is required. You are an expert in four or more of the following areas and interested in learning the rest: C#/.NET Web services (esp. WebAPI or NancyFx; Richardson L2) Cloud environments (esp. AWS) and architectures/implementations (e.g. CQRS/ES, circuit breakers, messaging, etc.) Enterprise application performance monitoring (e.g. ELK, Nagios, New Relic, Riverbed) System security (e.g. OWASP, OAuth) Infrastructure-as-Code (e.g. Puppet, Chef, Ansible, Docker, boxstarter, chocolatey/WinRM/PowerShell) MS SQL Server/T-SQL You must have worked in an agile delivery environment and understand not only the mechanics, but also the underlying motivations. Bridge is primarily a .NET shop (server-side), so experience in this area is preferable; however, Bridge also values developers with diverse experience, so serious exposure to other languages and ecosystems (e.g. NodeJS, Ruby, functional languages, NoSQL DBs) is a bonus. Bridge is a strong supporter of open source projects - familiarity with OSS projects is a plus; contributions to open source projects are a big plus. You're also A detailed doer - You have a track record of getting things done. You're organized and responsive. You take ownership of every idea you touch and execute it to a fine level of detail, setting targets, engaging others, and doing whatever it takes to get the job done. You can multi-task dozens of such projects at once and never lose sight of the details.
Likely, you have some experience in a start-up or other rapid-growth company. A networking mastermind - You excel at meeting new people and turning them into advocates. You communicate in a clear, conscientious, and effective way in both written and oral speech. You can influence strangers in the course of a single conversation. Allies and colleagues will go to bat for your ideas. A creative problem-solver - Growing any business from scratch comes with massive and constant challenges. On top of that, Bridge works in volatile, low-resource communities and runs on fees averaging just $6 a month per pupil. You need to be flexible and ready to get everything done effectively, quickly, and affordably with the materials at hand. Every dollar you spend is a dollar our customers, who live on less than $2 a day, will have to pay for. A customer advocate - Our customers - these families living on less than $2 a day per person - never leave your mind. You know them, get them, have shared a meal with them (or would be happy to in the future). You would never shrink back from shaking a parent's hand or picking up a crying child, no matter what the person was wearing or looked like. Every decision you make considers their customer benefit, experience, and value. A life-long learner - You believe you can always do better. You welcome constructive criticism and provide it freely to others. You know you only get better tomorrow when others point out where you've missed things or failed today.
As a Big Data Engineer, you will build utilities that help orchestrate the migration of massive Hadoop/Big Data systems onto public cloud systems. You will build data processing scripts and pipelines that serve many jobs and queries per day. The services you build will integrate directly with cloud services, opening the door to new and cutting-edge re-usable solutions. You will work with engineering teams, co-workers, and customers to gain new insights and dream of new possibilities. The Big Data Engineering team is hiring in the following areas: • Distributed storage and compute solutions • Data ingestion, consolidation, and warehousing • Cloud migrations and replication pipelines • Hybrid on-premise and in-cloud Big Data solutions • Big Data, Hadoop, and Spark processing Basic Requirements: • 2+ years of hands-on experience in data structures, distributed systems, Hadoop and Spark, and SQL and NoSQL databases • Strong software development skills in at least one of: Java, C/C++, Python, or Scala • Experience building and deploying cloud-based solutions at scale • Experience in developing Big Data solutions (migration, storage, processing) • BS, MS, or PhD degree in Computer Science or Engineering, and 5+ years of relevant work experience in Big Data and cloud systems • Experience building and supporting large-scale systems in a production environment Technology Stack: Cloud Platforms - AWS, GCP, or Azure Big Data Distributions - Any of Apache Hadoop/CDH/HDP/EMR/Google Dataproc/HDInsight Distributed Processing Frameworks - One or more of MapReduce, Apache Spark, Apache Storm, Apache Flink Database/Warehouse - Hive, HBase, and at least one cloud-native service Orchestration Frameworks - Any of Airflow, Oozie, Apache NiFi, Google Dataflow Message/Event Solutions - Any of Kafka, Kinesis, Cloud Pub/Sub Container Orchestration (Good to have) - Kubernetes or Swarm
Precily AI: Automatic summarization - shortening a business document or book with our AI. Create a summary of the major points of the original document. The AI can produce a coherent summary taking into account variables such as length, writing style, and syntax. We're also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks in processing data to provide solutions for various industries such as Enterprise, Healthcare, and Legal.
Must have skills: -Very strong coding skills in Core Java (1.5 and above) -Should be able to analyze complex code structures, data structures, algorithms/logic -Should have hands-on knowledge of working on Java multithreading programs -Should have expertise in the Java Collections framework -Must have good exposure to Struts/JSP services/jQuery/Ajax, JSON-based UI rendering Good to have skills (not mandatory): -Good working knowledge of the JavaScript/jQuery framework -Should have used the HTML5/CSS3/Node.js/D3 framework in at least one earlier project -Hands-on experience with latest technologies like Cassandra, Solr, Hadoop would be an advantage -Knowledge of graph structures would be desirable
DevOps Architect, responsible for designing & implementing DevOps-related work tasks and clarifying system/deployment-related issues directly with the customer
Work on different POCs. Experience in Java/J2EE programming and coding, and more.
Strong background in Linux/Unix administration. • Experience with CI tools like Jenkins • Experience with automation/configuration management using Docker, Puppet, Ansible, Chef, or an equivalent • Build, release, and configuration management of production systems. • System troubleshooting and problem solving across platform and application domains. • Deploying, automating, maintaining, and managing AWS cloud-based production systems to ensure the availability, performance, scalability, and security of production systems. • Pre-production acceptance testing to help assure the quality of our products/services. • Evaluate new technology options and vendor products. • Strong experience with SQL and MySQL (NoSQL experience is an add-on) • Suggesting architecture improvements, recommending process improvements. • Understanding of cloud-based services and knowledge of hosting, e.g. Amazon AWS. • Ensuring critical system security through best-in-class cloud security solutions. • Ability to use a wide variety of open source technologies and cloud services (experience with AWS is an add-on) • A working understanding of code and scripting (PHP, Python, Perl, and/or Ruby) will be an add-on. • Knowledge of Ant, Maven, or other build and release tools will be an add-on. • AWS: 2+ years' experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, EBS, S3, VPC, Glacier, IAM, CloudWatch, KMS) to develop and maintain an Amazon AWS-based cloud solution, with an emphasis on best-practice cloud security. • DevOps: Solid experience as a DevOps Engineer in a 24x7 uptime Amazon AWS environment, including automation experience with configuration management tools. • Monitoring Tools: Experience with system monitoring tools (e.g. Nagios, Zabbix)
Artificial Learning Systems India Pvt. Ltd. is looking for an exceptional Python Developer who will have a good background in, and understanding of, software systems, and the ability to work closely with the rest of the Engineering team from the early stages of design all the way through identifying and resolving production issues. Candidate Profile: The ideal candidate will be passionate about this role, which involves deep knowledge of both the application and the product, and he/she will also believe that automation is key to operating large-scale systems. Education: BE/B.Tech. from a reputed college. Technical skills required: • 3+ years' experience as a web developer in Python • Software design skills in product development • Proficiency in a modern open-source NoSQL database, preferably Cassandra • Proficient in the HTTP protocol, REST APIs, JSON • Experience with Flask (must have), Django (good to have) • Experience with Gunicorn, Celery, RabbitMQ, Supervisor Job Type: Full-time, permanent Job Location: Bangalore Who are we? Artificial Learning Systems (Artelus) is a 2-year-young company working in the Deep Learning space to solve healthcare problems. The company seeks to make products which would complement the knowledge and assist clinicians in making faster and more accurate diagnoses. Our team comprises a group of dedicated scientists trying to make the world a healthier place using the latest advances in computer science and machine learning, applying them to the field of medicine and healthcare. Why work with Artelus? We are working on exciting new scientific developments in the area of healthcare, and working with us will get you a solid education whatever your level of experience. This is a very exciting opportunity for a young scientist, and we look forward to working with you to help you develop your skills in our R&D center. What does working with Artelus mean to you?
• Working in a high energy and challenging environment • Work with International clients • Work in cutting edge technologies • Be a part of an exciting path breaking project • Great environment to work in
Description Auzmor is a US-headquartered, funded SaaS startup focused on disrupting the HR space. We combine passion and domain expertise, and build products with a focus on great end-user experiences. We are looking for a Technical Architect to envision, build, launch, and scale multiple SaaS products. What You Will Do: • Understand the broader strategy, business goals, and engineering priorities of the company and how to incorporate them into your designs of systems, components, or features • Design applications and architectures for multi-tenant SaaS software • Be responsible for the selection and use of frameworks, platforms, and design patterns for Cloud-based multi-tenant SaaS applications • Collaborate with engineers, QA, product managers, UX designers, partners/vendors, and other architects to build scalable systems, services, and products for our diverse ecosystem of users across apps What you will need • Minimum of 5+ years of hands-on engineering experience in SaaS, Cloud services environments, with architecture design and definition experience using Java/JEE, Struts, Spring, JMS & ORM (Hibernate, JPA) or other server-side technologies and frameworks • Strong understanding of architecture patterns such as multi-tenancy, scalability, federation, and microservices (design, decomposition, and maintenance) to build cloud-ready systems • Experience with server-side technologies (preferably Java or Go), frontend technologies (HTML/CSS, native JS, React, Angular, etc.), and testing frameworks and automation (PHPUnit, Codeception, Behat, Selenium, WebDriver, etc.) • Passion for quality and engineering excellence at scale What we would love to see • Exposure to Big Data-related technologies such as Hadoop, Spark, Cassandra, MapReduce, or NoSQL, and data management, data retrieval, data quality, ETL, data analysis • Familiarity with containerized deployments and cloud computing platforms (AWS, Azure, GCP)
Looking for Big data Developers in Mumbai Location
APPLY LINK: http://bit.ly/2yipqSE Go through the entire job post thoroughly before pressing Apply. There is an eleven-character French word v*n*i*r*t*e mentioned somewhere in the whole text which is irrelevant to the context. You will be required to enter this word while applying, else the application won't be considered submitted. Aspirant - Data Science & AI Team: Sciences Full-Time, Trainee Bengaluru, India Relevant Exp: 0 - 10 Years Background: Top-tier institute Compensation: Above standards Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior, enabling strengthened decisions. We are a scaling, established startup by IIT alumni, innovating & disrupting the marketing domain through artificial intelligence. We bring onboard those people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls every day. We are a deep-rooted organization with a six-year success story, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI, etc.), maintaining an awesome culture with a common vision to build great data products. In the past we have served fifty-five customers and are presently developing our second product, Robonate. The first was emmoQ - an emotion intelligence platform. The third offering, H2HData, is an innovation lab where we solve hard problems through data, science, & design. We work extensively & intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.
First and Foremost
Before you dive in exploring this opportunity and press Apply, we ask you to evaluate yourself: we are looking for the right candidate, not the best candidate. We love to work with someone who can genuinely gel with our vision, beliefs, thoughts, methods, and values, which are aligned with what you would expect in a true startup with ambitious goals. Skills are always secondary to us. Primarily, you must be someone who is not essentially looking for a job or career, but rather starving for a challenge, perhaps without knowing since when. A book could be written on what an applicant must have before joining a <real startup with meaningful product>. For brevity, in a nutshell, we need these three things in you:
1. You must be [super sharp] (Just an analogue, but Irodov, Mensa, Feynman, Polya, ACM, NIPS, ICAAC, BattleCode, DOTA, etc. should be your done stuff. Can you relate solution 1 to problem 2, or do you get confused even when you have solved a similar problem in the past? Can you grasp a problem statement in one go, or do you get stuck?)
2. You must be [extremely energetic] (Do you raise eyebrows when asked to stretch your limits, in terms of either complexity or extra hours? What comes to mind first: let's finish it today, or this can be done tomorrow too? It's Friday 10 PM at work: tired?)
3. You must be [honourably honest] (Do you tell others what you think, or what they want to hear? The latter is good for a sales team and their customers, not for this role. Are you honest with your work, and intrinsically with yourself first?)
You know yourself best. If not, ask your loved ones and then decide. We clearly need exceedingly motivated people with entrepreneurial traits, not an employee mindset. This is an immediate requirement. We shall run an accelerated interview process for fast closure; you will be required to be proactive and responsive.
Real ROLE
We are looking for students, graduates, and experienced folks with a real passion for algorithms, computing, and analysis. You would be required to work with our sciences team on complex cases from data science, machine learning, and business analytics.

Mandatory
R1. Must know functional programming in Python inside-out (https://docs.python.org/2/howto/functional.html), with a strong flair for data structures, linear algebra, and algorithm implementation. OOP alone will not be accepted.
R2. Must have soiled hands on methods, functions, and workarounds in NumPy, Pandas, Scikit-learn, SciPy, and Statsmodels; collectively you should have implemented at least 100 different techniques (we averaged this figure across past aspirants who have worked in this role).
R3. Must have implemented complex mathematical logic through a functional map-reduce style in Python.
R4. Must have an understanding of the EDA cycle, machine learning algorithms, hyper-parameter optimization, ensemble learning, regularization, prediction, clustering, and associations, at an essential level.
R5. Must have solved at least five problems through data science and machine learning. Mere Coursera learning and/or offline Kaggle attempts will not be accepted.

Preferred
R6. Good to have the calibre to learn PySpark within four weeks of joining us.
R7. Good to have the calibre to grasp the underlying business of a problem to be solved.
R8. Good to have an understanding of CNNs, RNNs, MLPs, and auto-encoders, at a basic level.
R9. Good to have solved at least three problems through deep learning. Mere Coursera learning and/or offline Kaggle attempts will not be accepted.
R10. Good to have worked on pre-processing techniques for images, audio, and text: OpenCV, Librosa, NLTK.
R11. Good to have used pre-trained models: VGGNet, Inception, ResNet, WaveNet, Word2Vec.

Ideal YOU
Y1. Degree in engineering or any other data-heavy field, at Bachelors level or above, from a top-tier institute.
Y2. Relevant experience of 0 - 10 years working on real-world problems in a reputed company or a proven startup.
Y3. You are a fanatical implementer who loves to spend time with content, code, and workarounds, more than with your loved ones.
Y4. You are a true believer that human intelligence can be augmented through computer science and mathematics, and your survival vinaigrette depends on getting the most from the data.
Y5. You have an entrepreneurial mindset, with ownership, intellect, and creativity as your way of working. These are not fancy words; we mean it.

Actual WE
W1. Real startup with meaningful products.
W2. Revolutionary, not just disruptive.
W3. Rule creators, not followers.
W4. Small teams with real brains, not a herd of blockheads.
W5. We trust you completely and should be trusted back.

Why Us
In addition to the regular stuff which every good startup offers (lots of learning, food, parties, an open culture, flexible working hours, and what not), we offer you: <Do your greatest work of life>. You will be working on our revolutionary products, which are pioneers in their respective categories. This is a fact. We try really hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You will be working with the creamiest talent on extremely challenging problems at the most happening workplace.

How to Apply
Apply online by clicking "Apply Now". For queries regarding an open position, please write to email@example.com. For more information, visit http://www.busigence.com
Careers: http://careers.busigence.com
Research: http://research.busigence.com
Jobs: http://careers.busigence.com/jobs/data-science
If you feel you are the right fit for the position, be sure to attach a PDF resume highlighting your:
A. Key skills
B. Knowledge inputs
C. Major accomplishments
D. Problems solved
E. Submissions: GitHub / StackOverflow / Kaggle / Project Euler, etc. (if applicable)
If you don't see an open position that interests you, join our Talent Pool and let us know how you can make a difference here.
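The requirements above (R1 and R3) ask for mathematical logic expressed through Python's functional map-reduce primitives rather than loops. As a purely illustrative sketch of that style (the function and data are hypothetical, not part of the posting):

```python
from functools import reduce

def mean_of_even_squares(xs):
    """Mean of the squares of the even numbers in xs,
    written functionally: filter -> map -> reduce, no loops or mutation."""
    evens = filter(lambda x: x % 2 == 0, xs)
    squares = list(map(lambda x: x * x, evens))
    return reduce(lambda acc, s: acc + s, squares) / len(squares)

print(mean_of_even_squares(range(1, 11)))  # (4+16+36+64+100)/5 = 44.0
```

The same pipeline scales conceptually to distributed frameworks such as PySpark (R6), where `map`, `filter`, and `reduce` become RDD transformations.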
Referrals are more than welcome. Keep us in the loop.
Position Description
Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans. Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
• BS/MS in Computer Science or a related field
• 5+ years' experience building web applications
• Solid understanding of computer science principles
• Excellent soft skills
• Understanding of major algorithms such as searching and sorting
• Strong skills in writing clean code using languages such as Java and J2EE technologies
• Understanding of how to engineer RESTful microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
• Deep knowledge of web technologies such as HTML5, CSS, and JSON
• Good understanding of continuous integration tools and frameworks such as Jenkins
• Experience working in Agile environments such as Scrum and Kanban
• Experience with performance tuning for very large-scale apps
• Experience writing scripts using Perl, Python, and shell scripting
• Experience writing jobs using open source cluster computing frameworks such as Spark
• Database design experience: relational (MySQL, Oracle), search (SOLR), and NoSQL (Cassandra, MongoDB, Hive)
• Aptitude for writing clean, succinct, and efficient code
• Attitude to thrive in a fun, fast-paced, startup-like environment
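The qualifications above name staple algorithms such as searching and sorting. As a hedged illustration of the expected baseline (not part of the posting; Python is used here since the role also lists Python scripting), a classic iterative binary search:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Assumes sorted_items is sorted ascending; runs in O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```

In production Python code one would typically reach for the standard-library `bisect` module instead of hand-rolling this, but interviews for roles like the above routinely expect the hand-written version.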
Position Description
Assists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned engineering projects. Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans. Generates weekly, monthly, and yearly reports using JIRA and open source tools, and provides updates to leadership teams. Proactively identifies issues and the root causes of critical issues. Works with cross-functional teams, sets up KT sessions, and mentors team members. Coordinates with the Sunnyvale and Bentonville teams. Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
• BS/MS in Computer Science or a related field
• 8+ years' experience building web applications
• Solid understanding of computer science principles
• Excellent soft skills
• Understanding of major algorithms such as searching and sorting
• Strong skills in writing clean code using languages such as Java and J2EE technologies
• Understanding of how to engineer RESTful microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
• Deep knowledge of web technologies such as HTML5, CSS, and JSON
• Good understanding of continuous integration tools and frameworks such as Jenkins
• Experience working in Agile environments such as Scrum and Kanban
• Experience with performance tuning for very large-scale apps
• Experience writing scripts using Perl, Python, and shell scripting
• Experience writing jobs using open source cluster computing frameworks such as Spark
• Database design experience: relational (MySQL, Oracle), search (SOLR), and NoSQL (Cassandra, MongoDB, Hive)
• Aptitude for writing clean, succinct, and efficient code
• Attitude to thrive in a fun, fast-paced, startup-like environment
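Of the software patterns listed in the qualifications (MVC, Singleton, Facade, Business Delegate), Singleton is the most compact to show. A minimal Python sketch, purely illustrative and not part of the posting (the `Config` class name is invented for the example):

```python
class Config:
    """Toy singleton: every call to Config() returns the same instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            # Create the single shared instance on first use
            cls._instance = super().__new__(cls)
        return cls._instance

a, b = Config(), Config()
print(a is b)  # True: both names refer to the one shared instance
```

Note this sketch is not thread-safe; in concurrent code the check-and-create step would need a lock, and many Python codebases prefer a module-level object over an explicit Singleton class.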
Skills: Big Data, Business Intelligence, Python, and R.
Minimum 5+ years of experience as a manager and 10+ years of overall industry experience across a variety of contexts, during which you've built scalable, robust, and fault-tolerant systems. You have solid knowledge of the whole web stack: front end, back end, databases, cache layer, HTTP protocol, TCP/IP, Linux, CPU architecture, etc. You are comfortable jamming on complex architecture and design principles with senior engineers.
Bias for action. You believe that speed and quality aren't mutually exclusive. You've shown good judgement about shipping as fast as possible while still making sure that products are built in a sustainable, responsible way.
Mentorship/guidance. You know that the most important part of your job is setting the team up for success. Through mentoring, teaching, and reviewing, you help other engineers make sound architectural decisions, improve their code quality, and get out of their comfort zone.
Commitment. You care tremendously about keeping the Uber experience consistent for users and strive to make any issues invisible to riders. You hold yourself personally accountable, jumping in and taking ownership of problems that might not even be in your team's scope.
Hiring know-how. You're a thoughtful interviewer who constantly raises the bar for excellence. You believe that what seems amazing one day becomes the norm the next, and that each new hire should significantly improve the team.
Design and business vision. You help your team understand requirements beyond the written word, and you thrive in an environment where you can uncover subtle details. Even in the absence of a PM or a designer, you show great attention to the design and product aspects of anything your team ships.